00:00:00.000 Started by upstream project "autotest-nightly" build number 3701 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3082 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.097 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.098 The recommended git tool is: git 00:00:00.098 using credential 00000000-0000-0000-0000-000000000002 00:00:00.099 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.144 Fetching changes from the remote Git repository 00:00:00.146 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.190 Using shallow fetch with depth 1 00:00:00.190 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.190 > git --version # timeout=10 00:00:00.213 > git --version # 'git version 2.39.2' 00:00:00.213 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.214 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.214 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.161 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.172 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.184 Checking out Revision f620ee97e10840540f53609861ee9b86caa3c192 (FETCH_HEAD) 00:00:05.184 > git config core.sparsecheckout # timeout=10 00:00:05.195 > git read-tree -mu HEAD # timeout=10 00:00:05.211 > git checkout -f f620ee97e10840540f53609861ee9b86caa3c192 # timeout=5 00:00:05.228 Commit message: "change IP of vertiv1 PDU" 00:00:05.228 > git rev-list --no-walk f620ee97e10840540f53609861ee9b86caa3c192 # timeout=10 00:00:05.335 [Pipeline] Start of Pipeline 00:00:05.349 [Pipeline] library 00:00:05.350 Loading library shm_lib@master 00:00:05.350 Library shm_lib@master is cached. Copying from home. 00:00:05.363 [Pipeline] node 00:00:05.368 Running on VM-host-SM16 in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:05.370 [Pipeline] { 00:00:05.380 [Pipeline] catchError 00:00:05.382 [Pipeline] { 00:00:05.390 [Pipeline] wrap 00:00:05.396 [Pipeline] { 00:00:05.401 [Pipeline] stage 00:00:05.403 [Pipeline] { (Prologue) 00:00:05.415 [Pipeline] echo 00:00:05.416 Node: VM-host-SM16 00:00:05.420 [Pipeline] cleanWs 00:00:05.428 [WS-CLEANUP] Deleting project workspace... 00:00:05.428 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.433 [WS-CLEANUP] done 00:00:05.595 [Pipeline] setCustomBuildProperty 00:00:05.651 [Pipeline] nodesByLabel 00:00:05.652 Found a total of 1 nodes with the 'sorcerer' label 00:00:05.660 [Pipeline] httpRequest 00:00:05.664 HttpMethod: GET 00:00:05.665 URL: http://10.211.164.101/packages/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:05.665 Sending request to url: http://10.211.164.101/packages/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:05.681 Response Code: HTTP/1.1 200 OK 00:00:05.682 Success: Status code 200 is in the accepted range: 200,404 00:00:05.682 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:11.004 [Pipeline] sh 00:00:11.288 + tar --no-same-owner -xf jbp_f620ee97e10840540f53609861ee9b86caa3c192.tar.gz 00:00:11.307 [Pipeline] httpRequest 00:00:11.311 HttpMethod: GET 00:00:11.312 URL: http://10.211.164.101/packages/spdk_b084cba072707e2667d482bdb3443f61a33be232.tar.gz 00:00:11.313 Sending request to url: http://10.211.164.101/packages/spdk_b084cba072707e2667d482bdb3443f61a33be232.tar.gz 00:00:11.327 Response Code: HTTP/1.1 200 OK 00:00:11.328 Success: Status code 200 is in the accepted range: 200,404 00:00:11.328 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk_b084cba072707e2667d482bdb3443f61a33be232.tar.gz 00:00:33.346 [Pipeline] sh 00:00:33.626 + tar --no-same-owner -xf spdk_b084cba072707e2667d482bdb3443f61a33be232.tar.gz 00:00:36.917 [Pipeline] sh 00:00:37.196 + git -C spdk log --oneline -n5 00:00:37.196 b084cba07 lib/blob: fixed potential expression overflow 00:00:37.196 ccad22cf9 test: split interrupt_common.sh 00:00:37.196 d4e4841d1 nvmf/vfio-user: improve mapping failure message 00:00:37.196 3e787bba6 nvmf: initialize sgroup->queued when poll group is created 00:00:37.196 b269b0edc doc: add lvol/blob shallow copy descriptions 00:00:37.240 [Pipeline] writeFile 00:00:37.260 [Pipeline] sh 00:00:37.538 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:37.549 [Pipeline] sh 00:00:37.848 + cat autorun-spdk.conf 00:00:37.848 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:37.848 SPDK_TEST_NVMF=1 00:00:37.848 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:37.848 SPDK_TEST_VFIOUSER=1 00:00:37.848 SPDK_TEST_USDT=1 00:00:37.848 SPDK_RUN_UBSAN=1 00:00:37.848 SPDK_TEST_NVMF_MDNS=1 00:00:37.848 NET_TYPE=virt 00:00:37.848 SPDK_JSONRPC_GO_CLIENT=1 00:00:37.848 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:37.891 RUN_NIGHTLY=1 00:00:37.893 [Pipeline] } 00:00:37.908 [Pipeline] // stage 00:00:37.934 [Pipeline] stage 00:00:37.943 [Pipeline] { (Run VM) 00:00:37.984 [Pipeline] sh 00:00:38.257 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:38.257 + echo 'Start stage prepare_nvme.sh' 00:00:38.257 Start stage prepare_nvme.sh 00:00:38.257 + [[ -n 2 ]] 00:00:38.257 + disk_prefix=ex2 00:00:38.257 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 ]] 00:00:38.257 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf ]] 00:00:38.257 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf 00:00:38.257 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.257 ++ SPDK_TEST_NVMF=1 00:00:38.257 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:38.257 ++ SPDK_TEST_VFIOUSER=1 00:00:38.257 ++ SPDK_TEST_USDT=1 00:00:38.257 ++ SPDK_RUN_UBSAN=1 00:00:38.257 ++ SPDK_TEST_NVMF_MDNS=1 00:00:38.257 ++ NET_TYPE=virt 00:00:38.257 ++ SPDK_JSONRPC_GO_CLIENT=1 00:00:38.257 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:38.257 ++ RUN_NIGHTLY=1 00:00:38.257 + cd 
/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:38.257 + nvme_files=() 00:00:38.257 + declare -A nvme_files 00:00:38.257 + backend_dir=/var/lib/libvirt/images/backends 00:00:38.257 + nvme_files['nvme.img']=5G 00:00:38.257 + nvme_files['nvme-cmb.img']=5G 00:00:38.257 + nvme_files['nvme-multi0.img']=4G 00:00:38.257 + nvme_files['nvme-multi1.img']=4G 00:00:38.257 + nvme_files['nvme-multi2.img']=4G 00:00:38.257 + nvme_files['nvme-openstack.img']=8G 00:00:38.257 + nvme_files['nvme-zns.img']=5G 00:00:38.257 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:38.257 + (( SPDK_TEST_FTL == 1 )) 00:00:38.257 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:38.257 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:38.257 + for nvme in "${!nvme_files[@]}" 00:00:38.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:00:38.257 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:38.257 + for nvme in "${!nvme_files[@]}" 00:00:38.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:00:38.257 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:38.257 + for nvme in "${!nvme_files[@]}" 00:00:38.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:00:38.257 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:38.257 + for nvme in "${!nvme_files[@]}" 00:00:38.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:00:38.257 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:38.257 + for nvme in "${!nvme_files[@]}" 00:00:38.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:00:38.257 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:38.257 + for nvme in "${!nvme_files[@]}" 00:00:38.257 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:00:38.257 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:38.514 + for nvme in "${!nvme_files[@]}" 00:00:38.514 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:00:38.514 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:38.514 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:00:38.514 + echo 'End stage prepare_nvme.sh' 00:00:38.514 End stage prepare_nvme.sh 00:00:38.525 [Pipeline] sh 00:00:38.802 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:38.802 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -H -a -v -f fedora38 00:00:38.802 00:00:38.802 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant 00:00:38.802 
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk 00:00:38.802 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:38.802 HELP=0 00:00:38.803 DRY_RUN=0 00:00:38.803 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img, 00:00:38.803 NVME_DISKS_TYPE=nvme,nvme, 00:00:38.803 NVME_AUTO_CREATE=0 00:00:38.803 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img, 00:00:38.803 NVME_CMB=,, 00:00:38.803 NVME_PMR=,, 00:00:38.803 NVME_ZNS=,, 00:00:38.803 NVME_MS=,, 00:00:38.803 NVME_FDP=,, 00:00:38.803 SPDK_VAGRANT_DISTRO=fedora38 00:00:38.803 SPDK_VAGRANT_VMCPU=10 00:00:38.803 SPDK_VAGRANT_VMRAM=12288 00:00:38.803 SPDK_VAGRANT_PROVIDER=libvirt 00:00:38.803 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:38.803 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:38.803 SPDK_OPENSTACK_NETWORK=0 00:00:38.803 VAGRANT_PACKAGE_BOX=0 00:00:38.803 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:00:38.803 FORCE_DISTRO=true 00:00:38.803 VAGRANT_BOX_VERSION= 00:00:38.803 EXTRA_VAGRANTFILES= 00:00:38.803 NIC_MODEL=e1000 00:00:38.803 00:00:38.803 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt' 00:00:38.803 /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest_2 00:00:42.097 Bringing machine 'default' up with 'libvirt' provider... 00:00:43.492 ==> default: Creating image (snapshot of base box volume). 00:00:43.492 ==> default: Creating domain with the following settings... 00:00:43.492 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1715624038_37e1b2fcd07eb6dd880e 00:00:43.492 ==> default: -- Domain type: kvm 00:00:43.492 ==> default: -- Cpus: 10 00:00:43.492 ==> default: -- Feature: acpi 00:00:43.492 ==> default: -- Feature: apic 00:00:43.492 ==> default: -- Feature: pae 00:00:43.492 ==> default: -- Memory: 12288M 00:00:43.492 ==> default: -- Memory Backing: hugepages: 00:00:43.492 ==> default: -- Management MAC: 00:00:43.492 ==> default: -- Loader: 00:00:43.492 ==> default: -- Nvram: 00:00:43.492 ==> default: -- Base box: spdk/fedora38 00:00:43.492 ==> default: -- Storage pool: default 00:00:43.492 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1715624038_37e1b2fcd07eb6dd880e.img (20G) 00:00:43.492 ==> default: -- Volume Cache: default 00:00:43.492 ==> default: -- Kernel: 00:00:43.492 ==> default: -- Initrd: 00:00:43.492 ==> default: -- Graphics Type: vnc 00:00:43.492 ==> default: -- Graphics Port: -1 00:00:43.492 ==> default: -- Graphics IP: 127.0.0.1 00:00:43.492 ==> default: -- Graphics Password: Not defined 00:00:43.492 ==> default: -- Video Type: cirrus 00:00:43.492 ==> default: -- Video VRAM: 9216 00:00:43.492 ==> default: -- Sound Type: 00:00:43.492 ==> default: -- Keymap: en-us 00:00:43.492 ==> default: -- TPM Path: 00:00:43.492 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:43.493 ==> default: -- Command line args: 00:00:43.493 ==> default: -> value=-device, 00:00:43.493 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:43.493 ==> default: -> value=-drive, 00:00:43.493 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-0-drive0, 00:00:43.493 ==> default: -> value=-device, 00:00:43.493 ==> default: -> 
value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:43.493 ==> default: -> value=-device, 00:00:43.493 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:43.493 ==> default: -> value=-drive, 00:00:43.493 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:43.493 ==> default: -> value=-device, 00:00:43.493 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:43.493 ==> default: -> value=-drive, 00:00:43.493 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:43.493 ==> default: -> value=-device, 00:00:43.493 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:43.493 ==> default: -> value=-drive, 00:00:43.493 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:43.493 ==> default: -> value=-device, 00:00:43.493 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:43.493 ==> default: Creating shared folders metadata... 00:00:43.493 ==> default: Starting domain. 00:00:45.392 ==> default: Waiting for domain to get an IP address... 00:01:03.479 ==> default: Waiting for SSH to become available... 00:01:04.485 ==> default: Configuring and enabling network interfaces... 00:01:09.758 default: SSH address: 192.168.121.53:22 00:01:09.758 default: SSH username: vagrant 00:01:09.758 default: SSH auth method: private key 00:01:11.660 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:20.040 ==> default: Mounting SSHFS shared folder... 00:01:20.606 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:20.606 ==> default: Checking Mount.. 00:01:21.978 ==> default: Folder Successfully Mounted! 00:01:21.978 ==> default: Running provisioner: file... 00:01:22.544 default: ~/.gitconfig => .gitconfig 00:01:23.110 00:01:23.110 SUCCESS! 00:01:23.110 00:01:23.110 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt and type "vagrant ssh" to use. 00:01:23.110 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:23.110 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt" to destroy all trace of vm. 
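
Editor's note: the vagrant stage above first formats raw backing files with create_nvme_img.sh and then defines a libvirt domain that exposes them to the guest as two emulated NVMe controllers: a single-namespace controller (serial 12340) backed by ex2-nvme.img, and a three-namespace controller (serial 12341) backed by the ex2-nvme-multi*.img files. The sketch below reproduces the same layout with qemu-img and qemu-system-x86_64 directly; the backend directory, guest image, and machine options are illustrative assumptions, while the -device nvme / -device nvme-ns arguments mirror the ones printed in the log (which additionally pin 4096-byte logical/physical block sizes and fixed PCI addresses).

#!/usr/bin/env bash
# Sketch: recreate the two-controller NVMe layout shown in the log with plain QEMU.
# backend_dir and guest_img are assumptions; the CI writes to /var/lib/libvirt/images/backends
# and boots the spdk/fedora38 box instead.
set -euo pipefail

backend_dir=${BACKEND_DIR:-./backends}
guest_img=${GUEST_IMG:-./fedora38.qcow2}   # hypothetical bootable guest disk
mkdir -p "$backend_dir"

# Raw backing files, matching the "Formatting ... fmt=raw" lines above.
qemu-img create -f raw "$backend_dir/ex2-nvme.img" 5G
for i in 0 1 2; do
    qemu-img create -f raw "$backend_dir/ex2-nvme-multi$i.img" 4G
done

# One NVMe controller with a single namespace and one with three namespaces,
# mirroring the -device nvme / -device nvme-ns arguments in the domain definition.
qemu-system-x86_64 \
    -machine q35,accel=kvm -m 12288 -smp 10 -nographic \
    -drive file="$guest_img",if=virtio \
    -drive format=raw,file="$backend_dir/ex2-nvme.img",if=none,id=nvme-0-drive0 \
    -device nvme,id=nvme-0,serial=12340 \
    -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1 \
    -drive format=raw,file="$backend_dir/ex2-nvme-multi0.img",if=none,id=nvme-1-drive0 \
    -drive format=raw,file="$backend_dir/ex2-nvme-multi1.img",if=none,id=nvme-1-drive1 \
    -drive format=raw,file="$backend_dir/ex2-nvme-multi2.img",if=none,id=nvme-1-drive2 \
    -device nvme,id=nvme-1,serial=12341 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1 \
    -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2 \
    -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3
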
00:01:23.110 00:01:23.118 [Pipeline] } 00:01:23.136 [Pipeline] // stage 00:01:23.144 [Pipeline] dir 00:01:23.144 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt 00:01:23.146 [Pipeline] { 00:01:23.163 [Pipeline] catchError 00:01:23.165 [Pipeline] { 00:01:23.179 [Pipeline] sh 00:01:23.461 + vagrant ssh-config --host vagrant 00:01:23.461 + sed -ne /^Host/,$p 00:01:23.461 + tee ssh_conf 00:01:27.702 Host vagrant 00:01:27.702 HostName 192.168.121.53 00:01:27.702 User vagrant 00:01:27.702 Port 22 00:01:27.702 UserKnownHostsFile /dev/null 00:01:27.702 StrictHostKeyChecking no 00:01:27.702 PasswordAuthentication no 00:01:27.702 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:01:27.702 IdentitiesOnly yes 00:01:27.702 LogLevel FATAL 00:01:27.702 ForwardAgent yes 00:01:27.702 ForwardX11 yes 00:01:27.702 00:01:27.714 [Pipeline] withEnv 00:01:27.716 [Pipeline] { 00:01:27.731 [Pipeline] sh 00:01:28.005 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:28.005 source /etc/os-release 00:01:28.005 [[ -e /image.version ]] && img=$(< /image.version) 00:01:28.005 # Minimal, systemd-like check. 00:01:28.005 if [[ -e /.dockerenv ]]; then 00:01:28.005 # Clear garbage from the node's name: 00:01:28.005 # agt-er_autotest_547-896 -> autotest_547-896 00:01:28.005 # $HOSTNAME is the actual container id 00:01:28.005 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:28.005 if mountpoint -q /etc/hostname; then 00:01:28.005 # We can assume this is a mount from a host where container is running, 00:01:28.005 # so fetch its hostname to easily identify the target swarm worker. 00:01:28.005 container="$(< /etc/hostname) ($agent)" 00:01:28.005 else 00:01:28.005 # Fallback 00:01:28.005 container=$agent 00:01:28.005 fi 00:01:28.005 fi 00:01:28.005 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:28.005 00:01:28.272 [Pipeline] } 00:01:28.292 [Pipeline] // withEnv 00:01:28.299 [Pipeline] setCustomBuildProperty 00:01:28.312 [Pipeline] stage 00:01:28.314 [Pipeline] { (Tests) 00:01:28.331 [Pipeline] sh 00:01:28.607 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:28.877 [Pipeline] timeout 00:01:28.877 Timeout set to expire in 40 min 00:01:28.879 [Pipeline] { 00:01:28.895 [Pipeline] sh 00:01:29.179 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:29.744 HEAD is now at b084cba07 lib/blob: fixed potential expression overflow 00:01:29.756 [Pipeline] sh 00:01:30.033 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:30.303 [Pipeline] sh 00:01:30.580 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:30.593 [Pipeline] sh 00:01:30.870 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:01:30.871 ++ readlink -f spdk_repo 00:01:31.129 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:31.129 + [[ -n /home/vagrant/spdk_repo ]] 00:01:31.129 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:31.129 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:31.129 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:31.129 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:31.129 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:31.129 + cd /home/vagrant/spdk_repo 00:01:31.129 + source /etc/os-release 00:01:31.129 ++ NAME='Fedora Linux' 00:01:31.129 ++ VERSION='38 (Cloud Edition)' 00:01:31.129 ++ ID=fedora 00:01:31.129 ++ VERSION_ID=38 00:01:31.129 ++ VERSION_CODENAME= 00:01:31.129 ++ PLATFORM_ID=platform:f38 00:01:31.129 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:31.129 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:31.129 ++ LOGO=fedora-logo-icon 00:01:31.129 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:31.129 ++ HOME_URL=https://fedoraproject.org/ 00:01:31.129 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:31.129 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:31.129 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:31.129 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:31.129 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:31.129 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:31.129 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:31.129 ++ SUPPORT_END=2024-05-14 00:01:31.129 ++ VARIANT='Cloud Edition' 00:01:31.129 ++ VARIANT_ID=cloud 00:01:31.129 + uname -a 00:01:31.129 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:31.129 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:31.387 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:31.387 Hugepages 00:01:31.387 node hugesize free / total 00:01:31.387 node0 1048576kB 0 / 0 00:01:31.387 node0 2048kB 0 / 0 00:01:31.387 00:01:31.387 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:31.387 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:31.645 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:31.645 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:31.645 + rm -f /tmp/spdk-ld-path 00:01:31.645 + source autorun-spdk.conf 00:01:31.645 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.645 ++ SPDK_TEST_NVMF=1 00:01:31.645 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.645 ++ SPDK_TEST_VFIOUSER=1 00:01:31.645 ++ SPDK_TEST_USDT=1 00:01:31.645 ++ SPDK_RUN_UBSAN=1 00:01:31.645 ++ SPDK_TEST_NVMF_MDNS=1 00:01:31.645 ++ NET_TYPE=virt 00:01:31.645 ++ SPDK_JSONRPC_GO_CLIENT=1 00:01:31.645 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.645 ++ RUN_NIGHTLY=1 00:01:31.645 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:31.645 + [[ -n '' ]] 00:01:31.645 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:31.645 + for M in /var/spdk/build-*-manifest.txt 00:01:31.645 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:31.645 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:31.645 + for M in /var/spdk/build-*-manifest.txt 00:01:31.645 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:31.645 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:31.645 ++ uname 00:01:31.645 + [[ Linux == \L\i\n\u\x ]] 00:01:31.645 + sudo dmesg -T 00:01:31.645 + sudo dmesg --clear 00:01:31.645 + dmesg_pid=5260 00:01:31.645 + sudo dmesg -Tw 00:01:31.645 + [[ Fedora Linux == FreeBSD ]] 00:01:31.645 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.645 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.645 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:31.645 + [[ -x /usr/src/fio-static/fio ]] 00:01:31.645 + export 
FIO_BIN=/usr/src/fio-static/fio 00:01:31.645 + FIO_BIN=/usr/src/fio-static/fio 00:01:31.645 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:31.645 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:31.645 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:31.645 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.645 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.645 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:31.645 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.645 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.645 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:31.645 Test configuration: 00:01:31.645 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.645 SPDK_TEST_NVMF=1 00:01:31.645 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.645 SPDK_TEST_VFIOUSER=1 00:01:31.645 SPDK_TEST_USDT=1 00:01:31.645 SPDK_RUN_UBSAN=1 00:01:31.645 SPDK_TEST_NVMF_MDNS=1 00:01:31.645 NET_TYPE=virt 00:01:31.645 SPDK_JSONRPC_GO_CLIENT=1 00:01:31.645 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.903 RUN_NIGHTLY=1 18:14:47 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:31.903 18:14:47 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:31.903 18:14:47 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:31.903 18:14:47 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:31.903 18:14:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.903 18:14:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.903 18:14:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.903 18:14:47 -- paths/export.sh@5 -- $ export PATH 00:01:31.903 18:14:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.903 18:14:47 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:31.903 18:14:47 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:31.903 18:14:47 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715624087.XXXXXX 00:01:31.903 18:14:47 -- 
common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715624087.Z6zwiS 00:01:31.903 18:14:47 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:31.903 18:14:47 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:31.903 18:14:47 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:31.903 18:14:47 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:31.903 18:14:47 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:31.903 18:14:47 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:31.903 18:14:47 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:01:31.903 18:14:47 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.903 18:14:47 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:01:31.903 18:14:47 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:31.903 18:14:47 -- pm/common@17 -- $ local monitor 00:01:31.903 18:14:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.903 18:14:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.903 18:14:47 -- pm/common@25 -- $ sleep 1 00:01:31.903 18:14:47 -- pm/common@21 -- $ date +%s 00:01:31.903 18:14:47 -- pm/common@21 -- $ date +%s 00:01:31.903 18:14:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715624087 00:01:31.903 18:14:47 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1715624087 00:01:31.903 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715624087_collect-vmstat.pm.log 00:01:31.903 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1715624087_collect-cpu-load.pm.log 00:01:32.857 18:14:48 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:32.857 18:14:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:32.857 18:14:48 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:32.857 18:14:48 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:32.857 18:14:48 -- spdk/autobuild.sh@16 -- $ date -u 00:01:32.857 Mon May 13 06:14:48 PM UTC 2024 00:01:32.857 18:14:48 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:32.857 v24.05-pre-599-gb084cba07 00:01:32.857 18:14:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:32.857 18:14:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:32.857 18:14:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:32.857 18:14:48 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:01:32.857 18:14:48 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:01:32.857 18:14:48 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.857 ************************************ 00:01:32.857 START TEST ubsan 00:01:32.857 ************************************ 00:01:32.857 using ubsan 00:01:32.857 18:14:48 ubsan -- 
common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:01:32.857 00:01:32.857 real 0m0.001s 00:01:32.857 user 0m0.000s 00:01:32.857 sys 0m0.000s 00:01:32.857 18:14:48 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:32.857 18:14:48 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:32.857 ************************************ 00:01:32.857 END TEST ubsan 00:01:32.857 ************************************ 00:01:32.857 18:14:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:32.857 18:14:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:32.857 18:14:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:32.857 18:14:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:32.857 18:14:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:32.857 18:14:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:32.857 18:14:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:32.857 18:14:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:32.857 18:14:48 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang --with-shared 00:01:33.422 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:33.422 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:33.991 Using 'verbs' RDMA provider 00:01:47.584 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:02.449 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:02.449 go version go1.21.1 linux/amd64 00:02:02.449 Creating mk/config.mk...done. 00:02:02.449 Creating mk/cc.flags.mk...done. 00:02:02.449 Type 'make' to build. 00:02:02.449 18:15:17 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:02:02.449 18:15:17 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:02.449 18:15:17 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:02.449 18:15:17 -- common/autotest_common.sh@10 -- $ set +x 00:02:02.449 ************************************ 00:02:02.449 START TEST make 00:02:02.449 ************************************ 00:02:02.449 18:15:17 make -- common/autotest_common.sh@1121 -- $ make -j10 00:02:02.449 make[1]: Nothing to be done for 'all'. 
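
Editor's note: the autobuild step above configures SPDK with the parameter string assembled by get_config_params and then builds with make -j10. For reference, the sketch below replays the same configure flags on a local clone outside the CI harness; the clone path and job count are assumptions, and the fio path is taken verbatim from the log, so it may not exist on another machine.

#!/usr/bin/env bash
# Sketch: replay the SPDK configure/make invocation recorded above on a local checkout.
# SPDK_DIR is an assumption; the flags are the ones printed by autobuild ("config_params"
# plus the --with-shared added by the configure call).
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-$HOME/spdk_repo/spdk}
cd "$SPDK_DIR"

./configure \
    --enable-debug --enable-werror \
    --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator \
    --disable-unit-tests --enable-ubsan --enable-coverage \
    --with-ublk --with-vfio-user --with-avahi --with-golang \
    --with-shared

make -j"$(nproc)"
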
00:02:03.016 The Meson build system 00:02:03.016 Version: 1.3.1 00:02:03.016 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:03.016 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:03.016 Build type: native build 00:02:03.016 Project name: libvfio-user 00:02:03.016 Project version: 0.0.1 00:02:03.016 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:03.016 C linker for the host machine: cc ld.bfd 2.39-16 00:02:03.016 Host machine cpu family: x86_64 00:02:03.016 Host machine cpu: x86_64 00:02:03.016 Run-time dependency threads found: YES 00:02:03.016 Library dl found: YES 00:02:03.016 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:03.016 Run-time dependency json-c found: YES 0.17 00:02:03.016 Run-time dependency cmocka found: YES 1.1.7 00:02:03.016 Program pytest-3 found: NO 00:02:03.016 Program flake8 found: NO 00:02:03.016 Program misspell-fixer found: NO 00:02:03.016 Program restructuredtext-lint found: NO 00:02:03.016 Program valgrind found: YES (/usr/bin/valgrind) 00:02:03.016 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:03.016 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:03.016 Compiler for C supports arguments -Wwrite-strings: YES 00:02:03.016 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:03.016 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:03.016 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:03.016 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:03.016 Build targets in project: 8 00:02:03.016 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:03.016 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:03.016 00:02:03.016 libvfio-user 0.0.1 00:02:03.016 00:02:03.016 User defined options 00:02:03.016 buildtype : debug 00:02:03.016 default_library: shared 00:02:03.016 libdir : /usr/local/lib 00:02:03.016 00:02:03.016 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:03.274 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:03.531 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:03.531 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:03.531 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:03.531 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:03.531 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:03.531 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:03.531 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:03.531 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:03.531 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:03.531 [10/37] Compiling C object samples/null.p/null.c.o 00:02:03.531 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:03.531 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:03.531 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:03.531 [14/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:03.531 [15/37] Compiling C object samples/client.p/client.c.o 00:02:03.789 [16/37] Compiling C object samples/server.p/server.c.o 00:02:03.789 [17/37] Linking target samples/client 00:02:03.789 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:03.789 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:03.789 [20/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:03.789 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:03.789 [22/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:03.789 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:03.789 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:03.789 [25/37] Linking target lib/libvfio-user.so.0.0.1 00:02:03.789 [26/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:03.789 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:03.789 [28/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:04.048 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:04.049 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:04.049 [31/37] Linking target test/unit_tests 00:02:04.049 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:04.049 [33/37] Linking target samples/server 00:02:04.049 [34/37] Linking target samples/shadow_ioeventfd_server 00:02:04.049 [35/37] Linking target samples/lspci 00:02:04.049 [36/37] Linking target samples/gpio-pci-idio-16 00:02:04.049 [37/37] Linking target samples/null 00:02:04.049 INFO: autodetecting backend as ninja 00:02:04.049 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:04.307 
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:04.572 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:04.572 ninja: no work to do. 00:02:14.542 The Meson build system 00:02:14.542 Version: 1.3.1 00:02:14.542 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:14.542 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:14.542 Build type: native build 00:02:14.542 Program cat found: YES (/usr/bin/cat) 00:02:14.542 Project name: DPDK 00:02:14.542 Project version: 23.11.0 00:02:14.542 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:14.542 C linker for the host machine: cc ld.bfd 2.39-16 00:02:14.542 Host machine cpu family: x86_64 00:02:14.542 Host machine cpu: x86_64 00:02:14.542 Message: ## Building in Developer Mode ## 00:02:14.542 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:14.542 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:14.542 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:14.542 Program python3 found: YES (/usr/bin/python3) 00:02:14.542 Program cat found: YES (/usr/bin/cat) 00:02:14.542 Compiler for C supports arguments -march=native: YES 00:02:14.542 Checking for size of "void *" : 8 00:02:14.542 Checking for size of "void *" : 8 (cached) 00:02:14.542 Library m found: YES 00:02:14.542 Library numa found: YES 00:02:14.542 Has header "numaif.h" : YES 00:02:14.542 Library fdt found: NO 00:02:14.542 Library execinfo found: NO 00:02:14.542 Has header "execinfo.h" : YES 00:02:14.542 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:14.542 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:14.542 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:14.542 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:14.542 Run-time dependency openssl found: YES 3.0.9 00:02:14.542 Run-time dependency libpcap found: YES 1.10.4 00:02:14.542 Has header "pcap.h" with dependency libpcap: YES 00:02:14.542 Compiler for C supports arguments -Wcast-qual: YES 00:02:14.542 Compiler for C supports arguments -Wdeprecated: YES 00:02:14.542 Compiler for C supports arguments -Wformat: YES 00:02:14.542 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:14.542 Compiler for C supports arguments -Wformat-security: NO 00:02:14.542 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:14.542 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:14.542 Compiler for C supports arguments -Wnested-externs: YES 00:02:14.542 Compiler for C supports arguments -Wold-style-definition: YES 00:02:14.542 Compiler for C supports arguments -Wpointer-arith: YES 00:02:14.542 Compiler for C supports arguments -Wsign-compare: YES 00:02:14.542 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:14.542 Compiler for C supports arguments -Wundef: YES 00:02:14.542 Compiler for C supports arguments -Wwrite-strings: YES 00:02:14.542 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:14.542 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:14.542 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:14.542 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:14.542 Program objdump found: YES (/usr/bin/objdump) 00:02:14.542 
Compiler for C supports arguments -mavx512f: YES 00:02:14.542 Checking if "AVX512 checking" compiles: YES 00:02:14.542 Fetching value of define "__SSE4_2__" : 1 00:02:14.542 Fetching value of define "__AES__" : 1 00:02:14.542 Fetching value of define "__AVX__" : 1 00:02:14.542 Fetching value of define "__AVX2__" : 1 00:02:14.542 Fetching value of define "__AVX512BW__" : (undefined) 00:02:14.542 Fetching value of define "__AVX512CD__" : (undefined) 00:02:14.542 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:14.542 Fetching value of define "__AVX512F__" : (undefined) 00:02:14.542 Fetching value of define "__AVX512VL__" : (undefined) 00:02:14.542 Fetching value of define "__PCLMUL__" : 1 00:02:14.542 Fetching value of define "__RDRND__" : 1 00:02:14.542 Fetching value of define "__RDSEED__" : 1 00:02:14.542 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:14.542 Fetching value of define "__znver1__" : (undefined) 00:02:14.542 Fetching value of define "__znver2__" : (undefined) 00:02:14.542 Fetching value of define "__znver3__" : (undefined) 00:02:14.542 Fetching value of define "__znver4__" : (undefined) 00:02:14.542 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:14.542 Message: lib/log: Defining dependency "log" 00:02:14.542 Message: lib/kvargs: Defining dependency "kvargs" 00:02:14.542 Message: lib/telemetry: Defining dependency "telemetry" 00:02:14.542 Checking for function "getentropy" : NO 00:02:14.542 Message: lib/eal: Defining dependency "eal" 00:02:14.542 Message: lib/ring: Defining dependency "ring" 00:02:14.542 Message: lib/rcu: Defining dependency "rcu" 00:02:14.542 Message: lib/mempool: Defining dependency "mempool" 00:02:14.542 Message: lib/mbuf: Defining dependency "mbuf" 00:02:14.542 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:14.542 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:14.542 Compiler for C supports arguments -mpclmul: YES 00:02:14.542 Compiler for C supports arguments -maes: YES 00:02:14.542 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:14.542 Compiler for C supports arguments -mavx512bw: YES 00:02:14.542 Compiler for C supports arguments -mavx512dq: YES 00:02:14.542 Compiler for C supports arguments -mavx512vl: YES 00:02:14.542 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:14.542 Compiler for C supports arguments -mavx2: YES 00:02:14.542 Compiler for C supports arguments -mavx: YES 00:02:14.542 Message: lib/net: Defining dependency "net" 00:02:14.542 Message: lib/meter: Defining dependency "meter" 00:02:14.542 Message: lib/ethdev: Defining dependency "ethdev" 00:02:14.542 Message: lib/pci: Defining dependency "pci" 00:02:14.542 Message: lib/cmdline: Defining dependency "cmdline" 00:02:14.542 Message: lib/hash: Defining dependency "hash" 00:02:14.542 Message: lib/timer: Defining dependency "timer" 00:02:14.542 Message: lib/compressdev: Defining dependency "compressdev" 00:02:14.542 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:14.542 Message: lib/dmadev: Defining dependency "dmadev" 00:02:14.542 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:14.542 Message: lib/power: Defining dependency "power" 00:02:14.542 Message: lib/reorder: Defining dependency "reorder" 00:02:14.542 Message: lib/security: Defining dependency "security" 00:02:14.542 Has header "linux/userfaultfd.h" : YES 00:02:14.542 Has header "linux/vduse.h" : YES 00:02:14.542 Message: lib/vhost: Defining dependency "vhost" 00:02:14.542 Compiler for C supports arguments 
-Wno-format-truncation: YES (cached) 00:02:14.542 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:14.542 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:14.542 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:14.542 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:14.542 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:14.542 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:14.542 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:14.542 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:14.542 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:14.543 Program doxygen found: YES (/usr/bin/doxygen) 00:02:14.543 Configuring doxy-api-html.conf using configuration 00:02:14.543 Configuring doxy-api-man.conf using configuration 00:02:14.543 Program mandb found: YES (/usr/bin/mandb) 00:02:14.543 Program sphinx-build found: NO 00:02:14.543 Configuring rte_build_config.h using configuration 00:02:14.543 Message: 00:02:14.543 ================= 00:02:14.543 Applications Enabled 00:02:14.543 ================= 00:02:14.543 00:02:14.543 apps: 00:02:14.543 00:02:14.543 00:02:14.543 Message: 00:02:14.543 ================= 00:02:14.543 Libraries Enabled 00:02:14.543 ================= 00:02:14.543 00:02:14.543 libs: 00:02:14.543 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:14.543 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:14.543 cryptodev, dmadev, power, reorder, security, vhost, 00:02:14.543 00:02:14.543 Message: 00:02:14.543 =============== 00:02:14.543 Drivers Enabled 00:02:14.543 =============== 00:02:14.543 00:02:14.543 common: 00:02:14.543 00:02:14.543 bus: 00:02:14.543 pci, vdev, 00:02:14.543 mempool: 00:02:14.543 ring, 00:02:14.543 dma: 00:02:14.543 00:02:14.543 net: 00:02:14.543 00:02:14.543 crypto: 00:02:14.543 00:02:14.543 compress: 00:02:14.543 00:02:14.543 vdpa: 00:02:14.543 00:02:14.543 00:02:14.543 Message: 00:02:14.543 ================= 00:02:14.543 Content Skipped 00:02:14.543 ================= 00:02:14.543 00:02:14.543 apps: 00:02:14.543 dumpcap: explicitly disabled via build config 00:02:14.543 graph: explicitly disabled via build config 00:02:14.543 pdump: explicitly disabled via build config 00:02:14.543 proc-info: explicitly disabled via build config 00:02:14.543 test-acl: explicitly disabled via build config 00:02:14.543 test-bbdev: explicitly disabled via build config 00:02:14.543 test-cmdline: explicitly disabled via build config 00:02:14.543 test-compress-perf: explicitly disabled via build config 00:02:14.543 test-crypto-perf: explicitly disabled via build config 00:02:14.543 test-dma-perf: explicitly disabled via build config 00:02:14.543 test-eventdev: explicitly disabled via build config 00:02:14.543 test-fib: explicitly disabled via build config 00:02:14.543 test-flow-perf: explicitly disabled via build config 00:02:14.543 test-gpudev: explicitly disabled via build config 00:02:14.543 test-mldev: explicitly disabled via build config 00:02:14.543 test-pipeline: explicitly disabled via build config 00:02:14.543 test-pmd: explicitly disabled via build config 00:02:14.543 test-regex: explicitly disabled via build config 00:02:14.543 test-sad: explicitly disabled via build config 00:02:14.543 test-security-perf: explicitly disabled via build config 00:02:14.543 00:02:14.543 libs: 00:02:14.543 metrics: explicitly disabled 
via build config 00:02:14.543 acl: explicitly disabled via build config 00:02:14.543 bbdev: explicitly disabled via build config 00:02:14.543 bitratestats: explicitly disabled via build config 00:02:14.543 bpf: explicitly disabled via build config 00:02:14.543 cfgfile: explicitly disabled via build config 00:02:14.543 distributor: explicitly disabled via build config 00:02:14.543 efd: explicitly disabled via build config 00:02:14.543 eventdev: explicitly disabled via build config 00:02:14.543 dispatcher: explicitly disabled via build config 00:02:14.543 gpudev: explicitly disabled via build config 00:02:14.543 gro: explicitly disabled via build config 00:02:14.543 gso: explicitly disabled via build config 00:02:14.543 ip_frag: explicitly disabled via build config 00:02:14.543 jobstats: explicitly disabled via build config 00:02:14.543 latencystats: explicitly disabled via build config 00:02:14.543 lpm: explicitly disabled via build config 00:02:14.543 member: explicitly disabled via build config 00:02:14.543 pcapng: explicitly disabled via build config 00:02:14.543 rawdev: explicitly disabled via build config 00:02:14.543 regexdev: explicitly disabled via build config 00:02:14.543 mldev: explicitly disabled via build config 00:02:14.543 rib: explicitly disabled via build config 00:02:14.543 sched: explicitly disabled via build config 00:02:14.543 stack: explicitly disabled via build config 00:02:14.543 ipsec: explicitly disabled via build config 00:02:14.543 pdcp: explicitly disabled via build config 00:02:14.543 fib: explicitly disabled via build config 00:02:14.543 port: explicitly disabled via build config 00:02:14.543 pdump: explicitly disabled via build config 00:02:14.543 table: explicitly disabled via build config 00:02:14.543 pipeline: explicitly disabled via build config 00:02:14.543 graph: explicitly disabled via build config 00:02:14.543 node: explicitly disabled via build config 00:02:14.543 00:02:14.543 drivers: 00:02:14.543 common/cpt: not in enabled drivers build config 00:02:14.543 common/dpaax: not in enabled drivers build config 00:02:14.543 common/iavf: not in enabled drivers build config 00:02:14.543 common/idpf: not in enabled drivers build config 00:02:14.543 common/mvep: not in enabled drivers build config 00:02:14.543 common/octeontx: not in enabled drivers build config 00:02:14.543 bus/auxiliary: not in enabled drivers build config 00:02:14.543 bus/cdx: not in enabled drivers build config 00:02:14.543 bus/dpaa: not in enabled drivers build config 00:02:14.543 bus/fslmc: not in enabled drivers build config 00:02:14.543 bus/ifpga: not in enabled drivers build config 00:02:14.543 bus/platform: not in enabled drivers build config 00:02:14.543 bus/vmbus: not in enabled drivers build config 00:02:14.543 common/cnxk: not in enabled drivers build config 00:02:14.543 common/mlx5: not in enabled drivers build config 00:02:14.543 common/nfp: not in enabled drivers build config 00:02:14.543 common/qat: not in enabled drivers build config 00:02:14.543 common/sfc_efx: not in enabled drivers build config 00:02:14.543 mempool/bucket: not in enabled drivers build config 00:02:14.543 mempool/cnxk: not in enabled drivers build config 00:02:14.543 mempool/dpaa: not in enabled drivers build config 00:02:14.543 mempool/dpaa2: not in enabled drivers build config 00:02:14.543 mempool/octeontx: not in enabled drivers build config 00:02:14.543 mempool/stack: not in enabled drivers build config 00:02:14.543 dma/cnxk: not in enabled drivers build config 00:02:14.543 dma/dpaa: not in enabled 
drivers build config 00:02:14.543 dma/dpaa2: not in enabled drivers build config 00:02:14.543 dma/hisilicon: not in enabled drivers build config 00:02:14.543 dma/idxd: not in enabled drivers build config 00:02:14.543 dma/ioat: not in enabled drivers build config 00:02:14.543 dma/skeleton: not in enabled drivers build config 00:02:14.543 net/af_packet: not in enabled drivers build config 00:02:14.543 net/af_xdp: not in enabled drivers build config 00:02:14.543 net/ark: not in enabled drivers build config 00:02:14.543 net/atlantic: not in enabled drivers build config 00:02:14.543 net/avp: not in enabled drivers build config 00:02:14.543 net/axgbe: not in enabled drivers build config 00:02:14.543 net/bnx2x: not in enabled drivers build config 00:02:14.543 net/bnxt: not in enabled drivers build config 00:02:14.543 net/bonding: not in enabled drivers build config 00:02:14.543 net/cnxk: not in enabled drivers build config 00:02:14.543 net/cpfl: not in enabled drivers build config 00:02:14.543 net/cxgbe: not in enabled drivers build config 00:02:14.543 net/dpaa: not in enabled drivers build config 00:02:14.543 net/dpaa2: not in enabled drivers build config 00:02:14.543 net/e1000: not in enabled drivers build config 00:02:14.543 net/ena: not in enabled drivers build config 00:02:14.543 net/enetc: not in enabled drivers build config 00:02:14.543 net/enetfec: not in enabled drivers build config 00:02:14.543 net/enic: not in enabled drivers build config 00:02:14.543 net/failsafe: not in enabled drivers build config 00:02:14.544 net/fm10k: not in enabled drivers build config 00:02:14.544 net/gve: not in enabled drivers build config 00:02:14.544 net/hinic: not in enabled drivers build config 00:02:14.544 net/hns3: not in enabled drivers build config 00:02:14.544 net/i40e: not in enabled drivers build config 00:02:14.544 net/iavf: not in enabled drivers build config 00:02:14.544 net/ice: not in enabled drivers build config 00:02:14.544 net/idpf: not in enabled drivers build config 00:02:14.544 net/igc: not in enabled drivers build config 00:02:14.544 net/ionic: not in enabled drivers build config 00:02:14.544 net/ipn3ke: not in enabled drivers build config 00:02:14.544 net/ixgbe: not in enabled drivers build config 00:02:14.544 net/mana: not in enabled drivers build config 00:02:14.544 net/memif: not in enabled drivers build config 00:02:14.544 net/mlx4: not in enabled drivers build config 00:02:14.544 net/mlx5: not in enabled drivers build config 00:02:14.544 net/mvneta: not in enabled drivers build config 00:02:14.544 net/mvpp2: not in enabled drivers build config 00:02:14.544 net/netvsc: not in enabled drivers build config 00:02:14.544 net/nfb: not in enabled drivers build config 00:02:14.544 net/nfp: not in enabled drivers build config 00:02:14.544 net/ngbe: not in enabled drivers build config 00:02:14.544 net/null: not in enabled drivers build config 00:02:14.544 net/octeontx: not in enabled drivers build config 00:02:14.544 net/octeon_ep: not in enabled drivers build config 00:02:14.544 net/pcap: not in enabled drivers build config 00:02:14.544 net/pfe: not in enabled drivers build config 00:02:14.544 net/qede: not in enabled drivers build config 00:02:14.544 net/ring: not in enabled drivers build config 00:02:14.544 net/sfc: not in enabled drivers build config 00:02:14.544 net/softnic: not in enabled drivers build config 00:02:14.544 net/tap: not in enabled drivers build config 00:02:14.544 net/thunderx: not in enabled drivers build config 00:02:14.544 net/txgbe: not in enabled drivers build 
config 00:02:14.544 net/vdev_netvsc: not in enabled drivers build config 00:02:14.544 net/vhost: not in enabled drivers build config 00:02:14.544 net/virtio: not in enabled drivers build config 00:02:14.544 net/vmxnet3: not in enabled drivers build config 00:02:14.544 raw/*: missing internal dependency, "rawdev" 00:02:14.544 crypto/armv8: not in enabled drivers build config 00:02:14.544 crypto/bcmfs: not in enabled drivers build config 00:02:14.544 crypto/caam_jr: not in enabled drivers build config 00:02:14.544 crypto/ccp: not in enabled drivers build config 00:02:14.544 crypto/cnxk: not in enabled drivers build config 00:02:14.544 crypto/dpaa_sec: not in enabled drivers build config 00:02:14.544 crypto/dpaa2_sec: not in enabled drivers build config 00:02:14.544 crypto/ipsec_mb: not in enabled drivers build config 00:02:14.544 crypto/mlx5: not in enabled drivers build config 00:02:14.544 crypto/mvsam: not in enabled drivers build config 00:02:14.544 crypto/nitrox: not in enabled drivers build config 00:02:14.544 crypto/null: not in enabled drivers build config 00:02:14.544 crypto/octeontx: not in enabled drivers build config 00:02:14.544 crypto/openssl: not in enabled drivers build config 00:02:14.544 crypto/scheduler: not in enabled drivers build config 00:02:14.544 crypto/uadk: not in enabled drivers build config 00:02:14.544 crypto/virtio: not in enabled drivers build config 00:02:14.544 compress/isal: not in enabled drivers build config 00:02:14.544 compress/mlx5: not in enabled drivers build config 00:02:14.544 compress/octeontx: not in enabled drivers build config 00:02:14.544 compress/zlib: not in enabled drivers build config 00:02:14.544 regex/*: missing internal dependency, "regexdev" 00:02:14.544 ml/*: missing internal dependency, "mldev" 00:02:14.544 vdpa/ifc: not in enabled drivers build config 00:02:14.544 vdpa/mlx5: not in enabled drivers build config 00:02:14.544 vdpa/nfp: not in enabled drivers build config 00:02:14.544 vdpa/sfc: not in enabled drivers build config 00:02:14.544 event/*: missing internal dependency, "eventdev" 00:02:14.544 baseband/*: missing internal dependency, "bbdev" 00:02:14.544 gpu/*: missing internal dependency, "gpudev" 00:02:14.544 00:02:14.544 00:02:14.544 Build targets in project: 85 00:02:14.544 00:02:14.544 DPDK 23.11.0 00:02:14.544 00:02:14.544 User defined options 00:02:14.544 buildtype : debug 00:02:14.544 default_library : shared 00:02:14.544 libdir : lib 00:02:14.544 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:14.544 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:14.544 c_link_args : 00:02:14.544 cpu_instruction_set: native 00:02:14.544 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:14.544 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:14.544 enable_docs : false 00:02:14.544 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:14.544 enable_kmods : false 00:02:14.544 tests : false 00:02:14.544 00:02:14.544 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:14.544 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 
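
Editor's note: the meson summary above lists the DPDK options the SPDK submodule build passes (debug build type, shared libraries, a pruned app/lib set, and only the bus/pci, bus/vdev and mempool/ring drivers). The sketch below shows a standalone meson/ninja invocation using option names and values from the "User defined options" block; the source and build paths are assumptions, and the long disable_apps/disable_libs lists are omitted for brevity but would be passed the same -Dkey=value way.

#!/usr/bin/env bash
# Sketch: configure and build DPDK the way the summary above records, using meson + ninja.
# DPDK_DIR and the build directory name are assumptions taken from the log's layout.
set -euo pipefail

DPDK_DIR=${DPDK_DIR:-$HOME/spdk_repo/spdk/dpdk}
cd "$DPDK_DIR"

meson setup build-tmp \
    -Dbuildtype=debug \
    -Ddefault_library=shared \
    -Dlibdir=lib \
    -Dprefix="$DPDK_DIR/build" \
    -Dcpu_instruction_set=native \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false

ninja -C build-tmp
ninja -C build-tmp install   # installs under the local prefix, no root required
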
00:02:14.544 [1/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:14.544 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:14.544 [3/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:14.544 [4/265] Linking static target lib/librte_kvargs.a 00:02:14.544 [5/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:14.544 [6/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:14.544 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:14.544 [8/265] Linking static target lib/librte_log.a 00:02:14.544 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:14.544 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:15.112 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.112 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:15.370 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:15.370 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:15.370 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:15.370 [16/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:15.370 [17/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.370 [18/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:15.370 [19/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.370 [20/265] Linking target lib/librte_log.so.24.0 00:02:15.370 [21/265] Linking static target lib/librte_telemetry.a 00:02:15.629 [22/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:15.629 [23/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:15.629 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:15.629 [25/265] Linking target lib/librte_kvargs.so.24.0 00:02:15.887 [26/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:16.145 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:16.145 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:16.145 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:16.145 [30/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.145 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:16.481 [32/265] Linking target lib/librte_telemetry.so.24.0 00:02:16.481 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:16.481 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:16.481 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:16.481 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:16.737 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:16.737 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:16.737 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:16.737 [40/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:16.737 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:16.737 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:16.737 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:16.996 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:16.996 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:17.254 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:17.254 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:17.512 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:17.512 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:17.512 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:17.770 [51/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:17.770 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:17.770 [53/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:17.770 [54/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:18.028 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:18.028 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:18.028 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:18.287 [58/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:18.287 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:18.287 [60/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:18.287 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:18.287 [62/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:18.287 [63/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:18.543 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:18.543 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:18.800 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:18.800 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:18.800 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:19.058 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:19.058 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:19.058 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:19.316 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:19.316 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:19.316 [74/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:19.316 [75/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:19.316 [76/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:19.316 [77/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:19.573 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:19.573 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:19.573 [80/265] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:19.831 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:20.088 [82/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:20.088 [83/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:20.088 [84/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:20.088 [85/265] Linking static target lib/librte_eal.a 00:02:20.346 [86/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:20.346 [87/265] Linking static target lib/librte_ring.a 00:02:20.346 [88/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:20.346 [89/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:20.604 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:20.604 [91/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:20.604 [92/265] Linking static target lib/librte_rcu.a 00:02:20.604 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:20.604 [94/265] Linking static target lib/librte_mempool.a 00:02:20.861 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:20.861 [96/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.119 [97/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.119 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:21.119 [99/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:21.119 [100/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:21.119 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:21.377 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:21.635 [103/265] Linking static target lib/librte_mbuf.a 00:02:21.636 [104/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:21.636 [105/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:21.893 [106/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:21.893 [107/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.893 [108/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:21.893 [109/265] Linking static target lib/librte_meter.a 00:02:21.893 [110/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:21.893 [111/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:22.161 [112/265] Linking static target lib/librte_net.a 00:02:22.436 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:22.436 [114/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.436 [115/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.694 [116/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.694 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:22.951 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:22.951 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:23.516 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:23.516 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 
00:02:23.516 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:23.773 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:23.773 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:23.773 [125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:23.773 [126/265] Linking static target lib/librte_pci.a 00:02:23.773 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:24.031 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:24.031 [129/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:24.031 [130/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:24.031 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:24.031 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:24.031 [133/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.289 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:24.289 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:24.289 [136/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:24.289 [137/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:24.289 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:24.289 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:24.289 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:24.289 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:24.289 [142/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:24.289 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:24.547 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:24.547 [145/265] Linking static target lib/librte_ethdev.a 00:02:24.547 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:24.547 [147/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:24.547 [148/265] Linking static target lib/librte_cmdline.a 00:02:25.112 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:25.112 [150/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:25.112 [151/265] Linking static target lib/librte_timer.a 00:02:25.112 [152/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:25.112 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:25.369 [154/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:25.369 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:25.369 [156/265] Linking static target lib/librte_hash.a 00:02:25.674 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:25.674 [158/265] Linking static target lib/librte_compressdev.a 00:02:25.674 [159/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:25.674 [160/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.674 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:25.932 [162/265] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:25.932 [163/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:26.190 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:26.190 [165/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:26.190 [166/265] Linking static target lib/librte_dmadev.a 00:02:26.449 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:26.449 [168/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.449 [169/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:26.449 [170/265] Linking static target lib/librte_cryptodev.a 00:02:26.449 [171/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:26.449 [172/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.449 [173/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.449 [174/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:26.707 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:26.965 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.965 [177/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:26.965 [178/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:26.965 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:27.222 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:27.222 [181/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:27.222 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:27.222 [183/265] Linking static target lib/librte_power.a 00:02:27.787 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:27.787 [185/265] Linking static target lib/librte_reorder.a 00:02:27.787 [186/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:27.787 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:27.787 [188/265] Linking static target lib/librte_security.a 00:02:27.787 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:28.044 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:28.044 [191/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:28.301 [192/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.558 [193/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.558 [194/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.558 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:28.815 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:28.815 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:28.815 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.072 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:29.072 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:29.330 [201/265] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:29.330 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:29.330 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:29.587 [204/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:29.587 [205/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:29.587 [206/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:29.587 [207/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:29.587 [208/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:29.587 [209/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:29.845 [210/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:29.845 [211/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:29.845 [212/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:29.845 [213/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:29.845 [214/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:29.845 [215/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:29.845 [216/265] Linking static target drivers/librte_bus_pci.a 00:02:29.845 [217/265] Linking static target drivers/librte_bus_vdev.a 00:02:29.845 [218/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:30.103 [219/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:30.103 [220/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.103 [221/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:30.103 [222/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:30.103 [223/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:30.103 [224/265] Linking static target drivers/librte_mempool_ring.a 00:02:30.360 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.926 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:31.184 [227/265] Linking static target lib/librte_vhost.a 00:02:31.752 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.752 [229/265] Linking target lib/librte_eal.so.24.0 00:02:32.009 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:32.009 [231/265] Linking target lib/librte_ring.so.24.0 00:02:32.009 [232/265] Linking target lib/librte_timer.so.24.0 00:02:32.009 [233/265] Linking target drivers/librte_bus_vdev.so.24.0 00:02:32.009 [234/265] Linking target lib/librte_dmadev.so.24.0 00:02:32.009 [235/265] Linking target lib/librte_pci.so.24.0 00:02:32.009 [236/265] Linking target lib/librte_meter.so.24.0 00:02:32.009 [237/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:32.009 [238/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:32.009 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:32.009 [240/265] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:32.267 [241/265] Linking target lib/librte_mempool.so.24.0 00:02:32.267 [242/265] Linking target lib/librte_rcu.so.24.0 00:02:32.267 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:02:32.267 [244/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:32.267 [245/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.267 [246/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:32.267 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:32.267 [248/265] Linking target drivers/librte_mempool_ring.so.24.0 00:02:32.267 [249/265] Linking target lib/librte_mbuf.so.24.0 00:02:32.524 [250/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.524 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:32.524 [252/265] Linking target lib/librte_reorder.so.24.0 00:02:32.524 [253/265] Linking target lib/librte_net.so.24.0 00:02:32.524 [254/265] Linking target lib/librte_compressdev.so.24.0 00:02:32.524 [255/265] Linking target lib/librte_cryptodev.so.24.0 00:02:32.782 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:32.782 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:32.782 [258/265] Linking target lib/librte_hash.so.24.0 00:02:32.782 [259/265] Linking target lib/librte_security.so.24.0 00:02:32.782 [260/265] Linking target lib/librte_cmdline.so.24.0 00:02:32.782 [261/265] Linking target lib/librte_ethdev.so.24.0 00:02:33.039 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:33.039 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:33.039 [264/265] Linking target lib/librte_power.so.24.0 00:02:33.039 [265/265] Linking target lib/librte_vhost.so.24.0 00:02:33.039 INFO: autodetecting backend as ninja 00:02:33.039 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:34.413 CC lib/ut_mock/mock.o 00:02:34.413 CC lib/log/log.o 00:02:34.413 CC lib/log/log_flags.o 00:02:34.413 CC lib/log/log_deprecated.o 00:02:34.413 CC lib/ut/ut.o 00:02:34.413 LIB libspdk_ut_mock.a 00:02:34.413 LIB libspdk_ut.a 00:02:34.413 SO libspdk_ut_mock.so.6.0 00:02:34.413 LIB libspdk_log.a 00:02:34.413 SO libspdk_ut.so.2.0 00:02:34.413 SO libspdk_log.so.7.0 00:02:34.413 SYMLINK libspdk_ut_mock.so 00:02:34.413 SYMLINK libspdk_ut.so 00:02:34.699 SYMLINK libspdk_log.so 00:02:34.699 CXX lib/trace_parser/trace.o 00:02:34.699 CC lib/ioat/ioat.o 00:02:34.699 CC lib/dma/dma.o 00:02:34.699 CC lib/util/base64.o 00:02:34.699 CC lib/util/cpuset.o 00:02:34.699 CC lib/util/bit_array.o 00:02:34.699 CC lib/util/crc16.o 00:02:34.699 CC lib/util/crc32.o 00:02:34.699 CC lib/util/crc32c.o 00:02:34.956 CC lib/vfio_user/host/vfio_user_pci.o 00:02:34.956 CC lib/util/crc32_ieee.o 00:02:34.956 CC lib/vfio_user/host/vfio_user.o 00:02:34.956 CC lib/util/crc64.o 00:02:34.956 LIB libspdk_dma.a 00:02:34.956 SO libspdk_dma.so.4.0 00:02:34.956 CC lib/util/dif.o 00:02:34.956 LIB libspdk_ioat.a 00:02:34.956 SYMLINK libspdk_dma.so 00:02:34.956 CC lib/util/fd.o 00:02:34.956 SO libspdk_ioat.so.7.0 00:02:34.956 CC lib/util/file.o 00:02:34.956 CC lib/util/hexlify.o 00:02:34.956 CC lib/util/iov.o 
00:02:35.215 CC lib/util/math.o 00:02:35.215 SYMLINK libspdk_ioat.so 00:02:35.215 CC lib/util/pipe.o 00:02:35.215 CC lib/util/strerror_tls.o 00:02:35.215 CC lib/util/string.o 00:02:35.215 LIB libspdk_vfio_user.a 00:02:35.215 SO libspdk_vfio_user.so.5.0 00:02:35.215 CC lib/util/uuid.o 00:02:35.215 CC lib/util/fd_group.o 00:02:35.215 CC lib/util/xor.o 00:02:35.215 CC lib/util/zipf.o 00:02:35.215 SYMLINK libspdk_vfio_user.so 00:02:35.472 LIB libspdk_util.a 00:02:35.729 SO libspdk_util.so.9.0 00:02:35.729 LIB libspdk_trace_parser.a 00:02:35.729 SO libspdk_trace_parser.so.5.0 00:02:35.729 SYMLINK libspdk_util.so 00:02:35.986 SYMLINK libspdk_trace_parser.so 00:02:35.986 CC lib/conf/conf.o 00:02:35.986 CC lib/idxd/idxd.o 00:02:35.986 CC lib/idxd/idxd_user.o 00:02:35.986 CC lib/env_dpdk/env.o 00:02:35.986 CC lib/json/json_parse.o 00:02:35.986 CC lib/json/json_util.o 00:02:35.986 CC lib/env_dpdk/memory.o 00:02:35.986 CC lib/json/json_write.o 00:02:35.986 CC lib/rdma/common.o 00:02:35.986 CC lib/vmd/vmd.o 00:02:36.244 CC lib/vmd/led.o 00:02:36.244 CC lib/rdma/rdma_verbs.o 00:02:36.244 LIB libspdk_conf.a 00:02:36.244 SO libspdk_conf.so.6.0 00:02:36.244 CC lib/env_dpdk/pci.o 00:02:36.244 CC lib/env_dpdk/init.o 00:02:36.244 LIB libspdk_json.a 00:02:36.244 SYMLINK libspdk_conf.so 00:02:36.244 CC lib/env_dpdk/threads.o 00:02:36.244 SO libspdk_json.so.6.0 00:02:36.501 CC lib/env_dpdk/pci_ioat.o 00:02:36.501 SYMLINK libspdk_json.so 00:02:36.501 CC lib/env_dpdk/pci_virtio.o 00:02:36.501 LIB libspdk_rdma.a 00:02:36.501 SO libspdk_rdma.so.6.0 00:02:36.501 LIB libspdk_idxd.a 00:02:36.501 SYMLINK libspdk_rdma.so 00:02:36.501 SO libspdk_idxd.so.12.0 00:02:36.501 CC lib/env_dpdk/pci_vmd.o 00:02:36.501 CC lib/env_dpdk/pci_idxd.o 00:02:36.501 SYMLINK libspdk_idxd.so 00:02:36.501 CC lib/env_dpdk/pci_event.o 00:02:36.501 LIB libspdk_vmd.a 00:02:36.501 CC lib/env_dpdk/sigbus_handler.o 00:02:36.501 CC lib/env_dpdk/pci_dpdk.o 00:02:36.758 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:36.758 SO libspdk_vmd.so.6.0 00:02:36.758 CC lib/jsonrpc/jsonrpc_server.o 00:02:36.758 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:36.758 CC lib/jsonrpc/jsonrpc_client.o 00:02:36.758 SYMLINK libspdk_vmd.so 00:02:36.758 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:36.758 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:37.016 LIB libspdk_jsonrpc.a 00:02:37.016 SO libspdk_jsonrpc.so.6.0 00:02:37.016 SYMLINK libspdk_jsonrpc.so 00:02:37.275 CC lib/rpc/rpc.o 00:02:37.275 LIB libspdk_env_dpdk.a 00:02:37.533 SO libspdk_env_dpdk.so.14.0 00:02:37.533 LIB libspdk_rpc.a 00:02:37.533 SO libspdk_rpc.so.6.0 00:02:37.533 SYMLINK libspdk_rpc.so 00:02:37.791 SYMLINK libspdk_env_dpdk.so 00:02:37.791 CC lib/notify/notify.o 00:02:37.791 CC lib/notify/notify_rpc.o 00:02:37.791 CC lib/keyring/keyring.o 00:02:37.791 CC lib/keyring/keyring_rpc.o 00:02:37.791 CC lib/trace/trace_flags.o 00:02:37.791 CC lib/trace/trace.o 00:02:37.791 CC lib/trace/trace_rpc.o 00:02:38.068 LIB libspdk_notify.a 00:02:38.068 LIB libspdk_keyring.a 00:02:38.068 SO libspdk_notify.so.6.0 00:02:38.068 SO libspdk_keyring.so.1.0 00:02:38.068 LIB libspdk_trace.a 00:02:38.068 SYMLINK libspdk_notify.so 00:02:38.325 SYMLINK libspdk_keyring.so 00:02:38.325 SO libspdk_trace.so.10.0 00:02:38.325 SYMLINK libspdk_trace.so 00:02:38.580 CC lib/thread/thread.o 00:02:38.580 CC lib/thread/iobuf.o 00:02:38.580 CC lib/sock/sock.o 00:02:38.580 CC lib/sock/sock_rpc.o 00:02:39.146 LIB libspdk_sock.a 00:02:39.146 SO libspdk_sock.so.9.0 00:02:39.146 SYMLINK libspdk_sock.so 00:02:39.404 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:39.404 CC 
lib/nvme/nvme_ctrlr.o 00:02:39.404 CC lib/nvme/nvme_fabric.o 00:02:39.404 CC lib/nvme/nvme_ns_cmd.o 00:02:39.404 CC lib/nvme/nvme_pcie_common.o 00:02:39.404 CC lib/nvme/nvme_ns.o 00:02:39.404 CC lib/nvme/nvme_pcie.o 00:02:39.404 CC lib/nvme/nvme.o 00:02:39.404 CC lib/nvme/nvme_qpair.o 00:02:39.972 LIB libspdk_thread.a 00:02:39.972 SO libspdk_thread.so.10.0 00:02:40.230 SYMLINK libspdk_thread.so 00:02:40.230 CC lib/nvme/nvme_quirks.o 00:02:40.230 CC lib/nvme/nvme_transport.o 00:02:40.230 CC lib/nvme/nvme_discovery.o 00:02:40.230 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:40.230 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:40.230 CC lib/nvme/nvme_tcp.o 00:02:40.230 CC lib/nvme/nvme_opal.o 00:02:40.488 CC lib/nvme/nvme_io_msg.o 00:02:40.746 CC lib/nvme/nvme_poll_group.o 00:02:40.746 CC lib/nvme/nvme_zns.o 00:02:40.746 CC lib/nvme/nvme_stubs.o 00:02:41.004 CC lib/nvme/nvme_auth.o 00:02:41.004 CC lib/nvme/nvme_cuse.o 00:02:41.004 CC lib/accel/accel.o 00:02:41.004 CC lib/nvme/nvme_vfio_user.o 00:02:41.004 CC lib/accel/accel_rpc.o 00:02:41.262 CC lib/accel/accel_sw.o 00:02:41.520 CC lib/nvme/nvme_rdma.o 00:02:41.520 CC lib/blob/blobstore.o 00:02:41.778 CC lib/init/json_config.o 00:02:41.778 CC lib/virtio/virtio.o 00:02:42.036 CC lib/virtio/virtio_vhost_user.o 00:02:42.036 CC lib/blob/request.o 00:02:42.036 CC lib/blob/zeroes.o 00:02:42.036 CC lib/vfu_tgt/tgt_endpoint.o 00:02:42.036 CC lib/virtio/virtio_vfio_user.o 00:02:42.036 CC lib/init/subsystem.o 00:02:42.293 LIB libspdk_accel.a 00:02:42.293 CC lib/virtio/virtio_pci.o 00:02:42.293 CC lib/init/subsystem_rpc.o 00:02:42.293 SO libspdk_accel.so.15.0 00:02:42.293 CC lib/init/rpc.o 00:02:42.293 CC lib/vfu_tgt/tgt_rpc.o 00:02:42.293 CC lib/blob/blob_bs_dev.o 00:02:42.293 SYMLINK libspdk_accel.so 00:02:42.552 LIB libspdk_init.a 00:02:42.552 LIB libspdk_vfu_tgt.a 00:02:42.552 LIB libspdk_virtio.a 00:02:42.552 SO libspdk_init.so.5.0 00:02:42.552 CC lib/bdev/bdev.o 00:02:42.552 SO libspdk_vfu_tgt.so.3.0 00:02:42.552 CC lib/bdev/bdev_rpc.o 00:02:42.552 CC lib/bdev/bdev_zone.o 00:02:42.552 SO libspdk_virtio.so.7.0 00:02:42.552 CC lib/bdev/part.o 00:02:42.552 CC lib/bdev/scsi_nvme.o 00:02:42.552 SYMLINK libspdk_vfu_tgt.so 00:02:42.552 SYMLINK libspdk_init.so 00:02:42.552 SYMLINK libspdk_virtio.so 00:02:42.810 CC lib/event/app.o 00:02:42.810 CC lib/event/log_rpc.o 00:02:42.810 CC lib/event/reactor.o 00:02:42.810 CC lib/event/app_rpc.o 00:02:42.810 CC lib/event/scheduler_static.o 00:02:42.810 LIB libspdk_nvme.a 00:02:43.068 SO libspdk_nvme.so.13.0 00:02:43.327 LIB libspdk_event.a 00:02:43.327 SO libspdk_event.so.13.0 00:02:43.327 SYMLINK libspdk_event.so 00:02:43.603 SYMLINK libspdk_nvme.so 00:02:44.536 LIB libspdk_blob.a 00:02:44.794 SO libspdk_blob.so.11.0 00:02:44.794 SYMLINK libspdk_blob.so 00:02:45.052 CC lib/lvol/lvol.o 00:02:45.052 CC lib/blobfs/blobfs.o 00:02:45.052 CC lib/blobfs/tree.o 00:02:45.309 LIB libspdk_bdev.a 00:02:45.309 SO libspdk_bdev.so.15.0 00:02:45.309 SYMLINK libspdk_bdev.so 00:02:45.567 CC lib/nbd/nbd.o 00:02:45.567 CC lib/nbd/nbd_rpc.o 00:02:45.567 CC lib/nvmf/ctrlr.o 00:02:45.567 CC lib/nvmf/ctrlr_discovery.o 00:02:45.567 CC lib/nvmf/ctrlr_bdev.o 00:02:45.567 CC lib/ftl/ftl_core.o 00:02:45.567 CC lib/ublk/ublk.o 00:02:45.567 CC lib/scsi/dev.o 00:02:45.824 CC lib/scsi/lun.o 00:02:45.824 CC lib/nvmf/subsystem.o 00:02:45.824 LIB libspdk_blobfs.a 00:02:45.824 LIB libspdk_lvol.a 00:02:46.083 SO libspdk_blobfs.so.10.0 00:02:46.083 SO libspdk_lvol.so.10.0 00:02:46.083 CC lib/ftl/ftl_init.o 00:02:46.083 LIB libspdk_nbd.a 00:02:46.083 SYMLINK 
libspdk_blobfs.so 00:02:46.083 SYMLINK libspdk_lvol.so 00:02:46.083 CC lib/ftl/ftl_layout.o 00:02:46.083 CC lib/ftl/ftl_debug.o 00:02:46.083 SO libspdk_nbd.so.7.0 00:02:46.083 CC lib/scsi/port.o 00:02:46.083 SYMLINK libspdk_nbd.so 00:02:46.083 CC lib/scsi/scsi.o 00:02:46.083 CC lib/scsi/scsi_bdev.o 00:02:46.356 CC lib/scsi/scsi_pr.o 00:02:46.357 CC lib/scsi/scsi_rpc.o 00:02:46.357 CC lib/nvmf/nvmf.o 00:02:46.357 CC lib/ublk/ublk_rpc.o 00:02:46.357 CC lib/scsi/task.o 00:02:46.357 CC lib/nvmf/nvmf_rpc.o 00:02:46.357 CC lib/ftl/ftl_io.o 00:02:46.357 CC lib/ftl/ftl_sb.o 00:02:46.647 LIB libspdk_ublk.a 00:02:46.647 SO libspdk_ublk.so.3.0 00:02:46.647 CC lib/ftl/ftl_l2p.o 00:02:46.647 CC lib/ftl/ftl_l2p_flat.o 00:02:46.647 SYMLINK libspdk_ublk.so 00:02:46.647 CC lib/ftl/ftl_nv_cache.o 00:02:46.647 CC lib/ftl/ftl_band.o 00:02:46.647 CC lib/ftl/ftl_band_ops.o 00:02:46.647 CC lib/ftl/ftl_writer.o 00:02:46.904 CC lib/ftl/ftl_rq.o 00:02:46.904 LIB libspdk_scsi.a 00:02:46.904 SO libspdk_scsi.so.9.0 00:02:46.904 CC lib/ftl/ftl_reloc.o 00:02:46.904 CC lib/ftl/ftl_l2p_cache.o 00:02:46.904 SYMLINK libspdk_scsi.so 00:02:46.904 CC lib/ftl/ftl_p2l.o 00:02:47.162 CC lib/nvmf/transport.o 00:02:47.162 CC lib/ftl/mngt/ftl_mngt.o 00:02:47.162 CC lib/iscsi/conn.o 00:02:47.162 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:47.162 CC lib/vhost/vhost.o 00:02:47.419 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:47.419 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:47.419 CC lib/vhost/vhost_rpc.o 00:02:47.419 CC lib/nvmf/tcp.o 00:02:47.419 CC lib/nvmf/stubs.o 00:02:47.419 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:47.419 CC lib/vhost/vhost_scsi.o 00:02:47.419 CC lib/vhost/vhost_blk.o 00:02:47.676 CC lib/vhost/rte_vhost_user.o 00:02:47.676 CC lib/nvmf/vfio_user.o 00:02:47.676 CC lib/iscsi/init_grp.o 00:02:47.958 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:47.958 CC lib/nvmf/rdma.o 00:02:47.958 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:47.958 CC lib/iscsi/iscsi.o 00:02:48.216 CC lib/nvmf/auth.o 00:02:48.216 CC lib/iscsi/md5.o 00:02:48.216 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:48.216 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:48.473 CC lib/iscsi/param.o 00:02:48.473 CC lib/iscsi/portal_grp.o 00:02:48.473 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:48.473 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:48.473 LIB libspdk_vhost.a 00:02:48.731 CC lib/iscsi/tgt_node.o 00:02:48.731 SO libspdk_vhost.so.8.0 00:02:48.731 CC lib/iscsi/iscsi_subsystem.o 00:02:48.731 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:48.731 CC lib/iscsi/iscsi_rpc.o 00:02:48.731 SYMLINK libspdk_vhost.so 00:02:48.731 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:48.731 CC lib/iscsi/task.o 00:02:48.988 CC lib/ftl/utils/ftl_conf.o 00:02:48.988 CC lib/ftl/utils/ftl_md.o 00:02:48.988 CC lib/ftl/utils/ftl_mempool.o 00:02:48.988 CC lib/ftl/utils/ftl_bitmap.o 00:02:49.246 CC lib/ftl/utils/ftl_property.o 00:02:49.246 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:49.246 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:49.246 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:49.246 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:49.246 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:49.246 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:49.504 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:49.504 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:49.504 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:49.504 LIB libspdk_iscsi.a 00:02:49.504 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:49.504 CC lib/ftl/base/ftl_base_dev.o 00:02:49.504 CC lib/ftl/base/ftl_base_bdev.o 00:02:49.504 CC lib/ftl/ftl_trace.o 00:02:49.504 SO libspdk_iscsi.so.8.0 00:02:49.763 SYMLINK libspdk_iscsi.so 00:02:49.763 
LIB libspdk_ftl.a 00:02:50.025 SO libspdk_ftl.so.9.0 00:02:50.025 LIB libspdk_nvmf.a 00:02:50.025 SO libspdk_nvmf.so.18.0 00:02:50.283 SYMLINK libspdk_ftl.so 00:02:50.283 SYMLINK libspdk_nvmf.so 00:02:50.847 CC module/vfu_device/vfu_virtio.o 00:02:50.847 CC module/env_dpdk/env_dpdk_rpc.o 00:02:50.847 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:50.847 CC module/accel/ioat/accel_ioat.o 00:02:50.847 CC module/sock/posix/posix.o 00:02:50.847 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:50.847 CC module/scheduler/gscheduler/gscheduler.o 00:02:50.847 CC module/accel/error/accel_error.o 00:02:50.847 CC module/blob/bdev/blob_bdev.o 00:02:50.847 CC module/keyring/file/keyring.o 00:02:50.847 LIB libspdk_env_dpdk_rpc.a 00:02:50.847 SO libspdk_env_dpdk_rpc.so.6.0 00:02:50.847 SYMLINK libspdk_env_dpdk_rpc.so 00:02:50.847 CC module/accel/ioat/accel_ioat_rpc.o 00:02:50.847 LIB libspdk_scheduler_dpdk_governor.a 00:02:50.847 CC module/keyring/file/keyring_rpc.o 00:02:50.847 LIB libspdk_scheduler_gscheduler.a 00:02:50.847 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:50.847 LIB libspdk_scheduler_dynamic.a 00:02:50.847 SO libspdk_scheduler_gscheduler.so.4.0 00:02:50.847 CC module/accel/error/accel_error_rpc.o 00:02:51.105 SO libspdk_scheduler_dynamic.so.4.0 00:02:51.105 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:51.105 SYMLINK libspdk_scheduler_gscheduler.so 00:02:51.105 CC module/vfu_device/vfu_virtio_blk.o 00:02:51.105 SYMLINK libspdk_scheduler_dynamic.so 00:02:51.105 CC module/vfu_device/vfu_virtio_scsi.o 00:02:51.105 LIB libspdk_accel_ioat.a 00:02:51.105 LIB libspdk_keyring_file.a 00:02:51.105 SO libspdk_accel_ioat.so.6.0 00:02:51.105 LIB libspdk_blob_bdev.a 00:02:51.105 SO libspdk_keyring_file.so.1.0 00:02:51.105 LIB libspdk_accel_error.a 00:02:51.105 SO libspdk_blob_bdev.so.11.0 00:02:51.105 SO libspdk_accel_error.so.2.0 00:02:51.105 SYMLINK libspdk_accel_ioat.so 00:02:51.105 CC module/accel/iaa/accel_iaa.o 00:02:51.105 SYMLINK libspdk_keyring_file.so 00:02:51.105 CC module/accel/iaa/accel_iaa_rpc.o 00:02:51.105 CC module/accel/dsa/accel_dsa.o 00:02:51.105 CC module/accel/dsa/accel_dsa_rpc.o 00:02:51.105 SYMLINK libspdk_accel_error.so 00:02:51.105 CC module/vfu_device/vfu_virtio_rpc.o 00:02:51.105 SYMLINK libspdk_blob_bdev.so 00:02:51.362 LIB libspdk_accel_iaa.a 00:02:51.362 LIB libspdk_vfu_device.a 00:02:51.362 SO libspdk_accel_iaa.so.3.0 00:02:51.362 LIB libspdk_accel_dsa.a 00:02:51.362 CC module/bdev/delay/vbdev_delay.o 00:02:51.362 SO libspdk_accel_dsa.so.5.0 00:02:51.362 SO libspdk_vfu_device.so.3.0 00:02:51.619 LIB libspdk_sock_posix.a 00:02:51.619 SYMLINK libspdk_accel_iaa.so 00:02:51.619 CC module/bdev/error/vbdev_error.o 00:02:51.619 CC module/bdev/error/vbdev_error_rpc.o 00:02:51.619 CC module/bdev/gpt/gpt.o 00:02:51.619 CC module/bdev/lvol/vbdev_lvol.o 00:02:51.619 SO libspdk_sock_posix.so.6.0 00:02:51.619 SYMLINK libspdk_accel_dsa.so 00:02:51.619 SYMLINK libspdk_vfu_device.so 00:02:51.619 CC module/bdev/malloc/bdev_malloc.o 00:02:51.619 CC module/blobfs/bdev/blobfs_bdev.o 00:02:51.619 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:51.619 SYMLINK libspdk_sock_posix.so 00:02:51.619 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:51.619 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:51.619 CC module/bdev/gpt/vbdev_gpt.o 00:02:51.876 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:51.876 LIB libspdk_bdev_error.a 00:02:51.876 CC module/bdev/null/bdev_null.o 00:02:51.876 CC module/bdev/null/bdev_null_rpc.o 00:02:51.877 SO libspdk_bdev_error.so.6.0 00:02:51.877 LIB 
libspdk_blobfs_bdev.a 00:02:51.877 SYMLINK libspdk_bdev_error.so 00:02:51.877 LIB libspdk_bdev_delay.a 00:02:51.877 SO libspdk_blobfs_bdev.so.6.0 00:02:51.877 SO libspdk_bdev_delay.so.6.0 00:02:51.877 LIB libspdk_bdev_malloc.a 00:02:51.877 SYMLINK libspdk_blobfs_bdev.so 00:02:52.134 SYMLINK libspdk_bdev_delay.so 00:02:52.134 SO libspdk_bdev_malloc.so.6.0 00:02:52.134 CC module/bdev/nvme/bdev_nvme.o 00:02:52.134 LIB libspdk_bdev_gpt.a 00:02:52.134 CC module/bdev/passthru/vbdev_passthru.o 00:02:52.134 LIB libspdk_bdev_null.a 00:02:52.134 SO libspdk_bdev_gpt.so.6.0 00:02:52.134 SYMLINK libspdk_bdev_malloc.so 00:02:52.134 SO libspdk_bdev_null.so.6.0 00:02:52.134 SYMLINK libspdk_bdev_gpt.so 00:02:52.134 CC module/bdev/raid/bdev_raid.o 00:02:52.134 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:52.134 SYMLINK libspdk_bdev_null.so 00:02:52.134 CC module/bdev/split/vbdev_split.o 00:02:52.134 LIB libspdk_bdev_lvol.a 00:02:52.134 CC module/bdev/aio/bdev_aio.o 00:02:52.134 SO libspdk_bdev_lvol.so.6.0 00:02:52.392 CC module/bdev/ftl/bdev_ftl.o 00:02:52.392 SYMLINK libspdk_bdev_lvol.so 00:02:52.392 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:52.392 CC module/bdev/iscsi/bdev_iscsi.o 00:02:52.392 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:52.392 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:52.392 CC module/bdev/split/vbdev_split_rpc.o 00:02:52.649 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:52.649 LIB libspdk_bdev_passthru.a 00:02:52.649 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:52.649 CC module/bdev/aio/bdev_aio_rpc.o 00:02:52.649 SO libspdk_bdev_passthru.so.6.0 00:02:52.649 LIB libspdk_bdev_split.a 00:02:52.649 SYMLINK libspdk_bdev_passthru.so 00:02:52.649 SO libspdk_bdev_split.so.6.0 00:02:52.649 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:52.649 LIB libspdk_bdev_zone_block.a 00:02:52.649 SYMLINK libspdk_bdev_split.so 00:02:52.649 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:52.649 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:52.649 SO libspdk_bdev_zone_block.so.6.0 00:02:52.649 LIB libspdk_bdev_aio.a 00:02:52.906 SO libspdk_bdev_aio.so.6.0 00:02:52.906 LIB libspdk_bdev_ftl.a 00:02:52.906 SYMLINK libspdk_bdev_zone_block.so 00:02:52.906 CC module/bdev/nvme/nvme_rpc.o 00:02:52.906 SO libspdk_bdev_ftl.so.6.0 00:02:52.906 SYMLINK libspdk_bdev_aio.so 00:02:52.906 CC module/bdev/nvme/bdev_mdns_client.o 00:02:52.906 CC module/bdev/nvme/vbdev_opal.o 00:02:52.906 LIB libspdk_bdev_iscsi.a 00:02:52.906 SYMLINK libspdk_bdev_ftl.so 00:02:52.906 CC module/bdev/raid/bdev_raid_rpc.o 00:02:52.906 CC module/bdev/raid/bdev_raid_sb.o 00:02:52.906 SO libspdk_bdev_iscsi.so.6.0 00:02:52.906 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:52.906 LIB libspdk_bdev_virtio.a 00:02:52.906 SO libspdk_bdev_virtio.so.6.0 00:02:52.906 SYMLINK libspdk_bdev_iscsi.so 00:02:52.906 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:53.164 CC module/bdev/raid/raid0.o 00:02:53.164 SYMLINK libspdk_bdev_virtio.so 00:02:53.164 CC module/bdev/raid/raid1.o 00:02:53.164 CC module/bdev/raid/concat.o 00:02:53.423 LIB libspdk_bdev_raid.a 00:02:53.423 SO libspdk_bdev_raid.so.6.0 00:02:53.423 SYMLINK libspdk_bdev_raid.so 00:02:54.356 LIB libspdk_bdev_nvme.a 00:02:54.356 SO libspdk_bdev_nvme.so.7.0 00:02:54.356 SYMLINK libspdk_bdev_nvme.so 00:02:54.921 CC module/event/subsystems/keyring/keyring.o 00:02:54.922 CC module/event/subsystems/iobuf/iobuf.o 00:02:54.922 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:54.922 CC module/event/subsystems/vmd/vmd.o 00:02:54.922 CC module/event/subsystems/scheduler/scheduler.o 00:02:54.922 CC 
module/event/subsystems/vmd/vmd_rpc.o 00:02:54.922 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:54.922 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:54.922 CC module/event/subsystems/sock/sock.o 00:02:55.180 LIB libspdk_event_keyring.a 00:02:55.180 LIB libspdk_event_vhost_blk.a 00:02:55.180 LIB libspdk_event_vfu_tgt.a 00:02:55.180 LIB libspdk_event_scheduler.a 00:02:55.180 SO libspdk_event_vhost_blk.so.3.0 00:02:55.180 SO libspdk_event_keyring.so.1.0 00:02:55.180 LIB libspdk_event_sock.a 00:02:55.180 LIB libspdk_event_vmd.a 00:02:55.180 SO libspdk_event_vfu_tgt.so.3.0 00:02:55.180 LIB libspdk_event_iobuf.a 00:02:55.180 SO libspdk_event_sock.so.5.0 00:02:55.180 SO libspdk_event_scheduler.so.4.0 00:02:55.180 SO libspdk_event_vmd.so.6.0 00:02:55.180 SYMLINK libspdk_event_vhost_blk.so 00:02:55.180 SO libspdk_event_iobuf.so.3.0 00:02:55.180 SYMLINK libspdk_event_vfu_tgt.so 00:02:55.180 SYMLINK libspdk_event_keyring.so 00:02:55.180 SYMLINK libspdk_event_sock.so 00:02:55.180 SYMLINK libspdk_event_scheduler.so 00:02:55.180 SYMLINK libspdk_event_vmd.so 00:02:55.180 SYMLINK libspdk_event_iobuf.so 00:02:55.439 CC module/event/subsystems/accel/accel.o 00:02:55.697 LIB libspdk_event_accel.a 00:02:55.697 SO libspdk_event_accel.so.6.0 00:02:55.697 SYMLINK libspdk_event_accel.so 00:02:55.954 CC module/event/subsystems/bdev/bdev.o 00:02:56.212 LIB libspdk_event_bdev.a 00:02:56.212 SO libspdk_event_bdev.so.6.0 00:02:56.212 SYMLINK libspdk_event_bdev.so 00:02:56.470 CC module/event/subsystems/ublk/ublk.o 00:02:56.470 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:56.470 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:56.470 CC module/event/subsystems/scsi/scsi.o 00:02:56.470 CC module/event/subsystems/nbd/nbd.o 00:02:56.729 LIB libspdk_event_ublk.a 00:02:56.729 LIB libspdk_event_nbd.a 00:02:56.729 LIB libspdk_event_scsi.a 00:02:56.729 SO libspdk_event_ublk.so.3.0 00:02:56.729 SO libspdk_event_scsi.so.6.0 00:02:56.729 SO libspdk_event_nbd.so.6.0 00:02:56.729 SYMLINK libspdk_event_ublk.so 00:02:56.729 SYMLINK libspdk_event_nbd.so 00:02:56.729 SYMLINK libspdk_event_scsi.so 00:02:56.729 LIB libspdk_event_nvmf.a 00:02:56.987 SO libspdk_event_nvmf.so.6.0 00:02:56.987 SYMLINK libspdk_event_nvmf.so 00:02:56.987 CC module/event/subsystems/iscsi/iscsi.o 00:02:56.987 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:57.246 LIB libspdk_event_vhost_scsi.a 00:02:57.246 LIB libspdk_event_iscsi.a 00:02:57.246 SO libspdk_event_vhost_scsi.so.3.0 00:02:57.246 SO libspdk_event_iscsi.so.6.0 00:02:57.246 SYMLINK libspdk_event_vhost_scsi.so 00:02:57.246 SYMLINK libspdk_event_iscsi.so 00:02:57.504 SO libspdk.so.6.0 00:02:57.504 SYMLINK libspdk.so 00:02:57.762 CC app/spdk_lspci/spdk_lspci.o 00:02:57.762 CXX app/trace/trace.o 00:02:57.762 CC app/trace_record/trace_record.o 00:02:57.762 CC app/iscsi_tgt/iscsi_tgt.o 00:02:57.762 CC app/nvmf_tgt/nvmf_main.o 00:02:57.762 CC app/spdk_tgt/spdk_tgt.o 00:02:57.762 CC examples/accel/perf/accel_perf.o 00:02:58.020 CC test/accel/dif/dif.o 00:02:58.020 CC test/app/bdev_svc/bdev_svc.o 00:02:58.020 LINK spdk_lspci 00:02:58.020 CC test/bdev/bdevio/bdevio.o 00:02:58.020 LINK nvmf_tgt 00:02:58.020 LINK spdk_trace_record 00:02:58.020 LINK spdk_tgt 00:02:58.020 LINK iscsi_tgt 00:02:58.277 LINK bdev_svc 00:02:58.277 LINK spdk_trace 00:02:58.277 CC app/spdk_nvme_perf/perf.o 00:02:58.277 CC app/spdk_nvme_identify/identify.o 00:02:58.277 LINK dif 00:02:58.277 LINK accel_perf 00:02:58.277 LINK bdevio 00:02:58.277 CC app/spdk_nvme_discover/discovery_aer.o 00:02:58.535 CC 
app/spdk_top/spdk_top.o 00:02:58.535 CC app/vhost/vhost.o 00:02:58.535 LINK spdk_nvme_discover 00:02:58.535 CC app/spdk_dd/spdk_dd.o 00:02:58.535 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:58.793 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:58.793 CC examples/bdev/hello_world/hello_bdev.o 00:02:58.793 LINK vhost 00:02:58.793 CC test/blobfs/mkfs/mkfs.o 00:02:59.051 CC app/fio/nvme/fio_plugin.o 00:02:59.051 LINK spdk_dd 00:02:59.051 LINK spdk_nvme_perf 00:02:59.051 LINK mkfs 00:02:59.051 LINK nvme_fuzz 00:02:59.051 LINK hello_bdev 00:02:59.051 LINK spdk_nvme_identify 00:02:59.051 CC app/fio/bdev/fio_plugin.o 00:02:59.309 TEST_HEADER include/spdk/accel.h 00:02:59.309 TEST_HEADER include/spdk/accel_module.h 00:02:59.309 TEST_HEADER include/spdk/assert.h 00:02:59.309 TEST_HEADER include/spdk/barrier.h 00:02:59.309 LINK spdk_top 00:02:59.309 TEST_HEADER include/spdk/base64.h 00:02:59.309 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:59.309 TEST_HEADER include/spdk/bdev.h 00:02:59.309 TEST_HEADER include/spdk/bdev_module.h 00:02:59.309 TEST_HEADER include/spdk/bdev_zone.h 00:02:59.309 TEST_HEADER include/spdk/bit_array.h 00:02:59.309 TEST_HEADER include/spdk/bit_pool.h 00:02:59.309 TEST_HEADER include/spdk/blob_bdev.h 00:02:59.309 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:59.309 TEST_HEADER include/spdk/blobfs.h 00:02:59.309 TEST_HEADER include/spdk/blob.h 00:02:59.309 TEST_HEADER include/spdk/conf.h 00:02:59.309 TEST_HEADER include/spdk/config.h 00:02:59.309 TEST_HEADER include/spdk/cpuset.h 00:02:59.309 TEST_HEADER include/spdk/crc16.h 00:02:59.309 TEST_HEADER include/spdk/crc32.h 00:02:59.309 TEST_HEADER include/spdk/crc64.h 00:02:59.309 TEST_HEADER include/spdk/dif.h 00:02:59.309 TEST_HEADER include/spdk/dma.h 00:02:59.309 TEST_HEADER include/spdk/endian.h 00:02:59.309 TEST_HEADER include/spdk/env_dpdk.h 00:02:59.309 TEST_HEADER include/spdk/env.h 00:02:59.309 TEST_HEADER include/spdk/event.h 00:02:59.309 TEST_HEADER include/spdk/fd_group.h 00:02:59.309 TEST_HEADER include/spdk/fd.h 00:02:59.309 TEST_HEADER include/spdk/file.h 00:02:59.309 TEST_HEADER include/spdk/ftl.h 00:02:59.310 TEST_HEADER include/spdk/gpt_spec.h 00:02:59.310 TEST_HEADER include/spdk/hexlify.h 00:02:59.310 TEST_HEADER include/spdk/histogram_data.h 00:02:59.310 TEST_HEADER include/spdk/idxd.h 00:02:59.310 TEST_HEADER include/spdk/idxd_spec.h 00:02:59.310 TEST_HEADER include/spdk/init.h 00:02:59.310 TEST_HEADER include/spdk/ioat.h 00:02:59.310 TEST_HEADER include/spdk/ioat_spec.h 00:02:59.310 TEST_HEADER include/spdk/iscsi_spec.h 00:02:59.310 TEST_HEADER include/spdk/json.h 00:02:59.310 TEST_HEADER include/spdk/jsonrpc.h 00:02:59.310 TEST_HEADER include/spdk/keyring.h 00:02:59.310 TEST_HEADER include/spdk/keyring_module.h 00:02:59.310 TEST_HEADER include/spdk/likely.h 00:02:59.310 TEST_HEADER include/spdk/log.h 00:02:59.310 TEST_HEADER include/spdk/lvol.h 00:02:59.310 TEST_HEADER include/spdk/memory.h 00:02:59.310 CC examples/bdev/bdevperf/bdevperf.o 00:02:59.310 TEST_HEADER include/spdk/mmio.h 00:02:59.310 TEST_HEADER include/spdk/nbd.h 00:02:59.310 TEST_HEADER include/spdk/notify.h 00:02:59.310 TEST_HEADER include/spdk/nvme.h 00:02:59.310 TEST_HEADER include/spdk/nvme_intel.h 00:02:59.310 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:59.310 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:59.310 TEST_HEADER include/spdk/nvme_spec.h 00:02:59.310 TEST_HEADER include/spdk/nvme_zns.h 00:02:59.310 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:59.567 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:59.567 TEST_HEADER 
include/spdk/nvmf.h 00:02:59.567 TEST_HEADER include/spdk/nvmf_spec.h 00:02:59.567 CC test/dma/test_dma/test_dma.o 00:02:59.567 TEST_HEADER include/spdk/nvmf_transport.h 00:02:59.567 TEST_HEADER include/spdk/opal.h 00:02:59.567 TEST_HEADER include/spdk/opal_spec.h 00:02:59.567 TEST_HEADER include/spdk/pci_ids.h 00:02:59.567 TEST_HEADER include/spdk/pipe.h 00:02:59.567 TEST_HEADER include/spdk/queue.h 00:02:59.567 TEST_HEADER include/spdk/reduce.h 00:02:59.567 TEST_HEADER include/spdk/rpc.h 00:02:59.567 TEST_HEADER include/spdk/scheduler.h 00:02:59.567 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:59.567 TEST_HEADER include/spdk/scsi.h 00:02:59.567 TEST_HEADER include/spdk/scsi_spec.h 00:02:59.567 TEST_HEADER include/spdk/sock.h 00:02:59.567 TEST_HEADER include/spdk/stdinc.h 00:02:59.567 TEST_HEADER include/spdk/string.h 00:02:59.567 TEST_HEADER include/spdk/thread.h 00:02:59.567 CC test/event/event_perf/event_perf.o 00:02:59.567 TEST_HEADER include/spdk/trace.h 00:02:59.567 TEST_HEADER include/spdk/trace_parser.h 00:02:59.567 TEST_HEADER include/spdk/tree.h 00:02:59.567 TEST_HEADER include/spdk/ublk.h 00:02:59.567 TEST_HEADER include/spdk/util.h 00:02:59.567 TEST_HEADER include/spdk/uuid.h 00:02:59.567 TEST_HEADER include/spdk/version.h 00:02:59.567 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:59.567 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:59.567 TEST_HEADER include/spdk/vhost.h 00:02:59.567 TEST_HEADER include/spdk/vmd.h 00:02:59.567 TEST_HEADER include/spdk/xor.h 00:02:59.567 TEST_HEADER include/spdk/zipf.h 00:02:59.567 CXX test/cpp_headers/accel.o 00:02:59.567 CC test/env/mem_callbacks/mem_callbacks.o 00:02:59.567 LINK spdk_nvme 00:02:59.568 LINK event_perf 00:02:59.568 LINK spdk_bdev 00:02:59.825 CXX test/cpp_headers/accel_module.o 00:02:59.825 CC test/lvol/esnap/esnap.o 00:02:59.825 CC test/env/vtophys/vtophys.o 00:02:59.825 LINK test_dma 00:02:59.825 LINK vhost_fuzz 00:02:59.825 CC test/event/reactor/reactor.o 00:02:59.825 CC test/event/reactor_perf/reactor_perf.o 00:03:00.083 LINK vtophys 00:03:00.083 CXX test/cpp_headers/assert.o 00:03:00.083 LINK reactor 00:03:00.083 CXX test/cpp_headers/barrier.o 00:03:00.083 CXX test/cpp_headers/base64.o 00:03:00.083 LINK reactor_perf 00:03:00.083 LINK bdevperf 00:03:00.083 LINK mem_callbacks 00:03:00.340 CC test/event/app_repeat/app_repeat.o 00:03:00.340 CXX test/cpp_headers/bdev.o 00:03:00.340 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:00.340 CXX test/cpp_headers/bdev_module.o 00:03:00.340 CC test/event/scheduler/scheduler.o 00:03:00.340 LINK iscsi_fuzz 00:03:00.340 CC test/env/memory/memory_ut.o 00:03:00.340 CC test/env/pci/pci_ut.o 00:03:00.340 LINK app_repeat 00:03:00.340 LINK env_dpdk_post_init 00:03:00.596 CXX test/cpp_headers/bdev_zone.o 00:03:00.597 LINK scheduler 00:03:00.597 CC examples/blob/hello_world/hello_blob.o 00:03:00.597 CXX test/cpp_headers/bit_array.o 00:03:00.597 CC test/nvme/aer/aer.o 00:03:00.597 CC test/app/histogram_perf/histogram_perf.o 00:03:00.853 CC examples/blob/cli/blobcli.o 00:03:00.853 CXX test/cpp_headers/bit_pool.o 00:03:00.853 CC test/rpc_client/rpc_client_test.o 00:03:00.853 LINK histogram_perf 00:03:00.853 LINK hello_blob 00:03:00.853 LINK pci_ut 00:03:00.853 LINK aer 00:03:00.853 CXX test/cpp_headers/blob_bdev.o 00:03:00.853 CC test/thread/poller_perf/poller_perf.o 00:03:01.110 LINK rpc_client_test 00:03:01.110 CC test/app/jsoncat/jsoncat.o 00:03:01.110 LINK poller_perf 00:03:01.110 CXX test/cpp_headers/blobfs_bdev.o 00:03:01.110 CC test/nvme/reset/reset.o 00:03:01.110 LINK 
jsoncat 00:03:01.110 CC test/nvme/sgl/sgl.o 00:03:01.369 CC test/app/stub/stub.o 00:03:01.369 LINK memory_ut 00:03:01.369 CXX test/cpp_headers/blobfs.o 00:03:01.369 LINK blobcli 00:03:01.369 CC examples/ioat/perf/perf.o 00:03:01.369 CXX test/cpp_headers/blob.o 00:03:01.369 LINK stub 00:03:01.369 CC examples/ioat/verify/verify.o 00:03:01.369 LINK reset 00:03:01.628 LINK sgl 00:03:01.628 CC test/nvme/e2edp/nvme_dp.o 00:03:01.628 CXX test/cpp_headers/conf.o 00:03:01.628 LINK ioat_perf 00:03:01.628 CC examples/nvme/hello_world/hello_world.o 00:03:01.628 LINK verify 00:03:01.628 CC test/nvme/overhead/overhead.o 00:03:01.885 CC examples/sock/hello_world/hello_sock.o 00:03:01.885 CXX test/cpp_headers/config.o 00:03:01.885 CC examples/nvme/reconnect/reconnect.o 00:03:01.885 CC test/nvme/err_injection/err_injection.o 00:03:01.885 CXX test/cpp_headers/cpuset.o 00:03:01.885 LINK nvme_dp 00:03:01.885 CXX test/cpp_headers/crc16.o 00:03:01.885 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:01.885 LINK hello_world 00:03:02.143 CXX test/cpp_headers/crc32.o 00:03:02.143 LINK err_injection 00:03:02.143 LINK hello_sock 00:03:02.143 CXX test/cpp_headers/crc64.o 00:03:02.143 LINK overhead 00:03:02.143 LINK reconnect 00:03:02.143 CC test/nvme/startup/startup.o 00:03:02.143 CC test/nvme/reserve/reserve.o 00:03:02.143 CXX test/cpp_headers/dif.o 00:03:02.400 CC examples/nvme/hotplug/hotplug.o 00:03:02.400 CC examples/nvme/arbitration/arbitration.o 00:03:02.400 CC test/nvme/simple_copy/simple_copy.o 00:03:02.400 CC test/nvme/connect_stress/connect_stress.o 00:03:02.400 LINK startup 00:03:02.400 CXX test/cpp_headers/dma.o 00:03:02.400 CC test/nvme/boot_partition/boot_partition.o 00:03:02.400 LINK nvme_manage 00:03:02.400 LINK reserve 00:03:02.400 LINK connect_stress 00:03:02.400 LINK hotplug 00:03:02.658 CXX test/cpp_headers/endian.o 00:03:02.658 LINK simple_copy 00:03:02.658 LINK boot_partition 00:03:02.658 LINK arbitration 00:03:02.658 CXX test/cpp_headers/env_dpdk.o 00:03:02.658 CC test/nvme/compliance/nvme_compliance.o 00:03:02.658 CXX test/cpp_headers/env.o 00:03:02.658 CXX test/cpp_headers/event.o 00:03:02.658 CC test/nvme/fused_ordering/fused_ordering.o 00:03:02.658 CXX test/cpp_headers/fd_group.o 00:03:02.658 CXX test/cpp_headers/fd.o 00:03:02.915 CXX test/cpp_headers/file.o 00:03:02.915 CXX test/cpp_headers/ftl.o 00:03:02.915 CXX test/cpp_headers/gpt_spec.o 00:03:02.915 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:02.915 LINK fused_ordering 00:03:02.915 CXX test/cpp_headers/hexlify.o 00:03:02.915 CC examples/nvme/abort/abort.o 00:03:02.915 CC examples/vmd/lsvmd/lsvmd.o 00:03:02.915 LINK nvme_compliance 00:03:03.174 CXX test/cpp_headers/histogram_data.o 00:03:03.174 CXX test/cpp_headers/idxd.o 00:03:03.174 CXX test/cpp_headers/idxd_spec.o 00:03:03.174 LINK cmb_copy 00:03:03.174 LINK lsvmd 00:03:03.174 CC examples/vmd/led/led.o 00:03:03.174 CXX test/cpp_headers/init.o 00:03:03.174 CXX test/cpp_headers/ioat.o 00:03:03.174 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:03.174 CC examples/nvmf/nvmf/nvmf.o 00:03:03.174 LINK abort 00:03:03.432 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:03.432 LINK led 00:03:03.432 CXX test/cpp_headers/ioat_spec.o 00:03:03.432 CC examples/util/zipf/zipf.o 00:03:03.432 CC test/nvme/fdp/fdp.o 00:03:03.432 LINK doorbell_aers 00:03:03.432 CC test/nvme/cuse/cuse.o 00:03:03.432 LINK pmr_persistence 00:03:03.689 LINK nvmf 00:03:03.689 CXX test/cpp_headers/iscsi_spec.o 00:03:03.689 LINK zipf 00:03:03.689 CXX test/cpp_headers/json.o 00:03:03.689 CC 
examples/thread/thread/thread_ex.o 00:03:03.689 CC examples/idxd/perf/perf.o 00:03:03.690 CXX test/cpp_headers/jsonrpc.o 00:03:03.690 LINK fdp 00:03:03.948 CXX test/cpp_headers/keyring.o 00:03:03.948 CXX test/cpp_headers/keyring_module.o 00:03:03.948 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:03.948 LINK thread 00:03:03.948 CXX test/cpp_headers/likely.o 00:03:03.948 CXX test/cpp_headers/log.o 00:03:03.948 CXX test/cpp_headers/lvol.o 00:03:03.948 CXX test/cpp_headers/memory.o 00:03:04.207 LINK idxd_perf 00:03:04.207 LINK interrupt_tgt 00:03:04.207 CXX test/cpp_headers/mmio.o 00:03:04.207 CXX test/cpp_headers/nbd.o 00:03:04.207 CXX test/cpp_headers/notify.o 00:03:04.207 CXX test/cpp_headers/nvme.o 00:03:04.207 CXX test/cpp_headers/nvme_intel.o 00:03:04.207 CXX test/cpp_headers/nvme_ocssd.o 00:03:04.207 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:04.207 CXX test/cpp_headers/nvme_spec.o 00:03:04.207 CXX test/cpp_headers/nvme_zns.o 00:03:04.465 CXX test/cpp_headers/nvmf_cmd.o 00:03:04.465 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:04.465 CXX test/cpp_headers/nvmf.o 00:03:04.465 CXX test/cpp_headers/nvmf_spec.o 00:03:04.465 CXX test/cpp_headers/nvmf_transport.o 00:03:04.465 CXX test/cpp_headers/opal.o 00:03:04.465 CXX test/cpp_headers/opal_spec.o 00:03:04.465 LINK esnap 00:03:04.724 CXX test/cpp_headers/pci_ids.o 00:03:04.724 CXX test/cpp_headers/pipe.o 00:03:04.724 CXX test/cpp_headers/queue.o 00:03:04.724 LINK cuse 00:03:04.724 CXX test/cpp_headers/reduce.o 00:03:04.724 CXX test/cpp_headers/rpc.o 00:03:04.724 CXX test/cpp_headers/scheduler.o 00:03:04.724 CXX test/cpp_headers/scsi.o 00:03:04.724 CXX test/cpp_headers/scsi_spec.o 00:03:04.724 CXX test/cpp_headers/sock.o 00:03:04.724 CXX test/cpp_headers/stdinc.o 00:03:04.724 CXX test/cpp_headers/string.o 00:03:04.982 CXX test/cpp_headers/thread.o 00:03:04.982 CXX test/cpp_headers/trace.o 00:03:04.982 CXX test/cpp_headers/trace_parser.o 00:03:04.982 CXX test/cpp_headers/tree.o 00:03:04.982 CXX test/cpp_headers/ublk.o 00:03:04.982 CXX test/cpp_headers/util.o 00:03:04.982 CXX test/cpp_headers/uuid.o 00:03:04.982 CXX test/cpp_headers/version.o 00:03:04.982 CXX test/cpp_headers/vfio_user_pci.o 00:03:04.982 CXX test/cpp_headers/vfio_user_spec.o 00:03:05.239 CXX test/cpp_headers/vhost.o 00:03:05.239 CXX test/cpp_headers/vmd.o 00:03:05.239 CXX test/cpp_headers/xor.o 00:03:05.239 CXX test/cpp_headers/zipf.o 00:03:10.503 00:03:10.503 real 1m9.143s 00:03:10.503 user 7m3.531s 00:03:10.503 sys 1m39.112s 00:03:10.503 18:16:26 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:03:10.503 18:16:26 make -- common/autotest_common.sh@10 -- $ set +x 00:03:10.503 ************************************ 00:03:10.503 END TEST make 00:03:10.503 ************************************ 00:03:10.503 18:16:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:10.503 18:16:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:10.503 18:16:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:10.503 18:16:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.503 18:16:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:10.503 18:16:26 -- pm/common@44 -- $ pid=5295 00:03:10.503 18:16:26 -- pm/common@50 -- $ kill -TERM 5295 00:03:10.503 18:16:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.503 18:16:26 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:10.503 18:16:26 -- pm/common@44 -- $ pid=5296 
00:03:10.503 18:16:26 -- pm/common@50 -- $ kill -TERM 5296 00:03:10.503 18:16:26 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:10.503 18:16:26 -- nvmf/common.sh@7 -- # uname -s 00:03:10.503 18:16:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:10.503 18:16:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:10.503 18:16:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:10.503 18:16:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:10.503 18:16:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:10.503 18:16:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:10.503 18:16:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:10.503 18:16:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:10.503 18:16:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:10.503 18:16:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:10.503 18:16:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:03:10.503 18:16:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:03:10.503 18:16:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:10.503 18:16:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:10.503 18:16:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:10.503 18:16:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:10.503 18:16:26 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:10.503 18:16:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:10.503 18:16:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:10.503 18:16:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:10.503 18:16:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.503 18:16:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.503 18:16:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.503 18:16:26 -- paths/export.sh@5 -- # export PATH 00:03:10.503 18:16:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.503 18:16:26 -- nvmf/common.sh@47 -- # : 0 00:03:10.503 18:16:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:10.503 18:16:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:10.503 18:16:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:10.503 18:16:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:10.503 
18:16:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:10.503 18:16:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:10.503 18:16:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:10.503 18:16:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:10.503 18:16:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:10.503 18:16:26 -- spdk/autotest.sh@32 -- # uname -s 00:03:10.503 18:16:26 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:10.503 18:16:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:10.503 18:16:26 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:10.503 18:16:26 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:10.503 18:16:26 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:10.503 18:16:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:10.503 18:16:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:10.503 18:16:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:10.503 18:16:26 -- spdk/autotest.sh@48 -- # udevadm_pid=54583 00:03:10.503 18:16:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:10.503 18:16:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:10.503 18:16:26 -- pm/common@17 -- # local monitor 00:03:10.503 18:16:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.503 18:16:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.503 18:16:26 -- pm/common@25 -- # sleep 1 00:03:10.503 18:16:26 -- pm/common@21 -- # date +%s 00:03:10.503 18:16:26 -- pm/common@21 -- # date +%s 00:03:10.503 18:16:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715624186 00:03:10.503 18:16:26 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1715624186 00:03:10.503 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715624186_collect-vmstat.pm.log 00:03:10.503 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1715624186_collect-cpu-load.pm.log 00:03:11.876 18:16:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:11.876 18:16:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:11.876 18:16:27 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:11.877 18:16:27 -- common/autotest_common.sh@10 -- # set +x 00:03:11.877 18:16:27 -- spdk/autotest.sh@59 -- # create_test_list 00:03:11.877 18:16:27 -- common/autotest_common.sh@744 -- # xtrace_disable 00:03:11.877 18:16:27 -- common/autotest_common.sh@10 -- # set +x 00:03:11.877 18:16:27 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:11.877 18:16:27 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:11.877 18:16:27 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:11.877 18:16:27 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:11.877 18:16:27 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:11.877 18:16:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:11.877 18:16:27 -- common/autotest_common.sh@1451 -- # uname 00:03:11.877 18:16:27 -- common/autotest_common.sh@1451 -- # '[' Linux = 
FreeBSD ']' 00:03:11.877 18:16:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:11.877 18:16:27 -- common/autotest_common.sh@1471 -- # uname 00:03:11.877 18:16:27 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:03:11.877 18:16:27 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:11.877 18:16:27 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:11.877 18:16:27 -- spdk/autotest.sh@72 -- # hash lcov 00:03:11.877 18:16:27 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:11.877 18:16:27 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:11.877 --rc lcov_branch_coverage=1 00:03:11.877 --rc lcov_function_coverage=1 00:03:11.877 --rc genhtml_branch_coverage=1 00:03:11.877 --rc genhtml_function_coverage=1 00:03:11.877 --rc genhtml_legend=1 00:03:11.877 --rc geninfo_all_blocks=1 00:03:11.877 ' 00:03:11.877 18:16:27 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:11.877 --rc lcov_branch_coverage=1 00:03:11.877 --rc lcov_function_coverage=1 00:03:11.877 --rc genhtml_branch_coverage=1 00:03:11.877 --rc genhtml_function_coverage=1 00:03:11.877 --rc genhtml_legend=1 00:03:11.877 --rc geninfo_all_blocks=1 00:03:11.877 ' 00:03:11.877 18:16:27 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:11.877 --rc lcov_branch_coverage=1 00:03:11.877 --rc lcov_function_coverage=1 00:03:11.877 --rc genhtml_branch_coverage=1 00:03:11.877 --rc genhtml_function_coverage=1 00:03:11.877 --rc genhtml_legend=1 00:03:11.877 --rc geninfo_all_blocks=1 00:03:11.877 --no-external' 00:03:11.877 18:16:27 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:11.877 --rc lcov_branch_coverage=1 00:03:11.877 --rc lcov_function_coverage=1 00:03:11.877 --rc genhtml_branch_coverage=1 00:03:11.877 --rc genhtml_function_coverage=1 00:03:11.877 --rc genhtml_legend=1 00:03:11.877 --rc geninfo_all_blocks=1 00:03:11.877 --no-external' 00:03:11.877 18:16:27 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:11.877 lcov: LCOV version 1.14 00:03:11.877 18:16:27 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:21.873 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:21.873 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:21.873 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:21.873 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:21.873 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:21.873 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:27.139 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:27.140 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:42.050 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:42.050 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:03:42.050 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:42.050 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:03:42.050 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:42.050 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:03:42.050 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:42.050 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:03:42.050 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:42.050 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:03:42.050 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:03:42.051 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:03:42.051 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:42.051 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions 
found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:42.052 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:03:42.052 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:42.052 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:03:43.951 18:16:59 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:43.951 18:16:59 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:43.951 18:16:59 -- common/autotest_common.sh@10 -- # set +x 00:03:43.951 18:16:59 -- spdk/autotest.sh@91 -- # rm -f 00:03:43.951 18:16:59 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:44.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:44.775 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:44.775 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:44.775 18:17:00 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:44.775 18:17:00 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:44.775 18:17:00 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:44.775 18:17:00 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:44.775 18:17:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:44.775 18:17:00 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:44.775 18:17:00 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:44.775 18:17:00 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:44.775 18:17:00 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:44.775 18:17:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:44.775 18:17:00 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:03:44.775 18:17:00 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:03:44.775 18:17:00 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:44.775 18:17:00 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:44.775 18:17:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:44.775 18:17:00 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:03:44.775 18:17:00 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:03:44.775 18:17:00 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:44.775 18:17:00 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:44.775 18:17:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:44.775 18:17:00 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:03:44.775 18:17:00 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 
00:03:44.775 18:17:00 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:44.775 18:17:00 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:44.775 18:17:00 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:44.775 18:17:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:44.775 18:17:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:44.775 18:17:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:44.775 18:17:00 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:44.775 18:17:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:44.775 No valid GPT data, bailing 00:03:44.775 18:17:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:44.775 18:17:00 -- scripts/common.sh@391 -- # pt= 00:03:44.775 18:17:00 -- scripts/common.sh@392 -- # return 1 00:03:44.775 18:17:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:44.775 1+0 records in 00:03:44.775 1+0 records out 00:03:44.775 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494762 s, 212 MB/s 00:03:44.775 18:17:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:44.775 18:17:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:44.775 18:17:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:44.775 18:17:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:44.775 18:17:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:44.775 No valid GPT data, bailing 00:03:44.775 18:17:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:44.775 18:17:00 -- scripts/common.sh@391 -- # pt= 00:03:44.775 18:17:00 -- scripts/common.sh@392 -- # return 1 00:03:44.775 18:17:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:44.776 1+0 records in 00:03:44.776 1+0 records out 00:03:44.776 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00441408 s, 238 MB/s 00:03:44.776 18:17:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:44.776 18:17:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:44.776 18:17:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:03:44.776 18:17:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:03:44.776 18:17:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:45.034 No valid GPT data, bailing 00:03:45.034 18:17:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:45.034 18:17:00 -- scripts/common.sh@391 -- # pt= 00:03:45.034 18:17:00 -- scripts/common.sh@392 -- # return 1 00:03:45.034 18:17:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:45.034 1+0 records in 00:03:45.034 1+0 records out 00:03:45.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467421 s, 224 MB/s 00:03:45.034 18:17:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:45.034 18:17:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:45.034 18:17:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:03:45.034 18:17:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:03:45.034 18:17:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:45.034 No valid GPT data, bailing 00:03:45.034 18:17:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:45.034 18:17:00 -- scripts/common.sh@391 -- # pt= 00:03:45.034 18:17:00 -- 
scripts/common.sh@392 -- # return 1 00:03:45.034 18:17:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:45.034 1+0 records in 00:03:45.034 1+0 records out 00:03:45.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00543053 s, 193 MB/s 00:03:45.034 18:17:00 -- spdk/autotest.sh@118 -- # sync 00:03:45.034 18:17:00 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:45.034 18:17:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:45.034 18:17:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:46.934 18:17:02 -- spdk/autotest.sh@124 -- # uname -s 00:03:46.934 18:17:02 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:46.934 18:17:02 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:46.934 18:17:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:46.934 18:17:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:46.934 18:17:02 -- common/autotest_common.sh@10 -- # set +x 00:03:46.934 ************************************ 00:03:46.934 START TEST setup.sh 00:03:46.934 ************************************ 00:03:46.934 18:17:02 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:46.934 * Looking for test storage... 00:03:46.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:46.934 18:17:02 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:46.934 18:17:02 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:46.934 18:17:02 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:46.934 18:17:02 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:46.934 18:17:02 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:46.934 18:17:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:46.934 ************************************ 00:03:46.934 START TEST acl 00:03:46.934 ************************************ 00:03:46.934 18:17:02 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:47.191 * Looking for test storage... 
00:03:47.191 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:47.191 18:17:02 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:47.191 18:17:02 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:47.191 18:17:02 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:47.191 18:17:02 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:47.191 18:17:02 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:47.191 18:17:02 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:47.191 18:17:02 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:47.191 18:17:02 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:47.191 18:17:02 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:47.757 18:17:03 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:47.757 18:17:03 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:47.757 18:17:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.757 18:17:03 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:47.757 18:17:03 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.757 18:17:03 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:48.322 18:17:04 setup.sh.acl -- 
setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:03:48.322 18:17:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.322 18:17:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.322 Hugepages 00:03:48.322 node hugesize free / total 00:03:48.322 18:17:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:48.322 18:17:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.322 18:17:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.322 00:03:48.322 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:48.322 18:17:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:48.322 18:17:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.322 18:17:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:48.581 18:17:04 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:48.581 18:17:04 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:48.581 18:17:04 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:48.581 18:17:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:48.581 ************************************ 00:03:48.581 START TEST denied 00:03:48.581 ************************************ 00:03:48.581 18:17:04 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied 00:03:48.581 18:17:04 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:03:48.581 18:17:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:03:48.581 18:17:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:48.581 18:17:04 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.581 18:17:04 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:49.515 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:03:49.515 18:17:05 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:03:49.515 18:17:05 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev 
driver 00:03:49.515 18:17:05 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:49.515 18:17:05 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:03:49.515 18:17:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:03:49.515 18:17:05 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:49.515 18:17:05 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:49.515 18:17:05 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:49.515 18:17:05 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.515 18:17:05 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:50.079 ************************************ 00:03:50.079 END TEST denied 00:03:50.079 ************************************ 00:03:50.079 00:03:50.079 real 0m1.440s 00:03:50.079 user 0m0.565s 00:03:50.079 sys 0m0.800s 00:03:50.079 18:17:05 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:50.079 18:17:05 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:50.079 18:17:05 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:50.079 18:17:05 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:50.079 18:17:05 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:50.079 18:17:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:50.079 ************************************ 00:03:50.079 START TEST allowed 00:03:50.079 ************************************ 00:03:50.079 18:17:05 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed 00:03:50.079 18:17:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:03:50.079 18:17:05 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:50.079 18:17:05 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:03:50.079 18:17:05 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:50.079 18:17:05 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:51.013 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:51.013 18:17:06 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:03:51.013 18:17:06 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:51.013 18:17:06 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:51.013 18:17:06 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:03:51.013 18:17:06 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:03:51.014 18:17:06 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:51.014 18:17:06 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:51.014 18:17:06 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:51.014 18:17:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:51.014 18:17:06 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.579 00:03:51.579 real 0m1.507s 00:03:51.579 user 0m0.659s 00:03:51.579 sys 0m0.832s 00:03:51.579 18:17:07 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:51.579 18:17:07 setup.sh.acl.allowed -- common/autotest_common.sh@10 
-- # set +x 00:03:51.579 ************************************ 00:03:51.579 END TEST allowed 00:03:51.579 ************************************ 00:03:51.579 ************************************ 00:03:51.579 END TEST acl 00:03:51.579 ************************************ 00:03:51.579 00:03:51.579 real 0m4.717s 00:03:51.579 user 0m2.056s 00:03:51.579 sys 0m2.581s 00:03:51.579 18:17:07 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:51.579 18:17:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:51.838 18:17:07 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:51.838 18:17:07 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:51.838 18:17:07 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:51.838 18:17:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:51.838 ************************************ 00:03:51.838 START TEST hugepages 00:03:51.838 ************************************ 00:03:51.838 18:17:07 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:03:51.838 * Looking for test storage... 00:03:51.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 5425308 kB' 'MemAvailable: 7373208 kB' 'Buffers: 2436 kB' 'Cached: 2157596 kB' 'SwapCached: 0 kB' 'Active: 872612 kB' 'Inactive: 1390340 kB' 'Active(anon): 113408 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390340 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 328 kB' 'Writeback: 0 kB' 'AnonPages: 104524 kB' 'Mapped: 48696 kB' 'Shmem: 10488 kB' 'KReclaimable: 71036 kB' 'Slab: 145612 kB' 'SReclaimable: 71036 kB' 'SUnreclaim: 74576 kB' 'KernelStack: 6348 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412432 kB' 'Committed_AS: 332552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 
kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 
setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- 
setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.838 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 
18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 
18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 
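[Editor's sketch] The block of trace entries above is the per-key scan inside the get_meminfo helper (setup/common.sh): it splits each /proc/meminfo line on ': ' with read -r var val _, hits continue for every key that is not the one requested, and finally echoes the matching value and returns (here Hugepagesize -> 2048, which hugepages.sh then uses for default_hugepages and the per-node nr_hugepages paths, before clear_hp zeroes the existing pools). As a compact reference, a minimal re-sketch of that pattern; this is a simplified stand-in under assumptions, not the actual setup/common.sh source (the real helper also loads the file via mapfile and can read a per-NUMA-node meminfo):

    # Sketch of the meminfo-scan pattern traced above (simplified, hypothetical helper name).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # In the trace above the unquoted comparison pattern appears
            # backslash-escaped by xtrace, e.g. \H\u\g\e\p\a\g\e\s\i\z\e.
            [[ $var == "$get" ]] || continue
            echo "$val"    # e.g. 2048 for Hugepagesize, 1024 for HugePages_Total
            return 0
        done < /proc/meminfo
        return 1
    }
    # get_meminfo_sketch Hugepagesize   -> 2048 on this runner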
00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:51.839 18:17:07 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:51.839 18:17:07 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:51.839 18:17:07 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:51.839 18:17:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:51.839 ************************************ 00:03:51.839 START TEST default_setup 00:03:51.839 ************************************ 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:51.839 18:17:07 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:52.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:52.676 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.676 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@90 -- # local sorted_t 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7471032 kB' 'MemAvailable: 9418764 kB' 'Buffers: 2436 kB' 'Cached: 2157588 kB' 'SwapCached: 0 kB' 'Active: 889556 kB' 'Inactive: 1390348 kB' 'Active(anon): 130352 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390348 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 148 kB' 'Writeback: 0 kB' 'AnonPages: 121428 kB' 'Mapped: 48868 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145264 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74576 kB' 'KernelStack: 6368 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
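[Editor's sketch] The snapshot printed just above is the first get_meminfo read inside verify_nr_hugepages, taken right after scripts/setup.sh ran for the default_setup test: HugePages_Total and HugePages_Free are now 1024 (the nr_hugepages value requested earlier), Hugepagesize is 2048 kB, and Hugetlb is 2097152 kB, i.e. 1024 x 2048 kB; the earlier snapshot in this log showed 2048 pages (Hugetlb 4194304 kB) before clear_hp and scripts/setup.sh re-provisioned them for this test. A quick, hypothetical way to reproduce that accounting check on a similar machine (not part of the test scripts):

    # Cross-check hugepage accounting as reported in the snapshot above:
    # Hugetlb should equal HugePages_Total * Hugepagesize (1024 * 2048 kB = 2097152 kB here).
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    hugetlb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)
    echo "expected $((total * size_kb)) kB, /proc/meminfo reports ${hugetlb} kB"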
00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.676 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 
18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.677 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.678 18:17:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7470056 kB' 'MemAvailable: 9417796 kB' 'Buffers: 2436 kB' 'Cached: 
2157588 kB' 'SwapCached: 0 kB' 'Active: 889180 kB' 'Inactive: 1390356 kB' 'Active(anon): 129976 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121132 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145268 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74580 kB' 'KernelStack: 6336 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.678 18:17:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.678 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.679 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.680 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.681 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7470056 kB' 'MemAvailable: 9417796 kB' 'Buffers: 2436 kB' 'Cached: 2157588 kB' 'SwapCached: 0 kB' 'Active: 889104 kB' 'Inactive: 1390356 kB' 'Active(anon): 129900 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121056 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145260 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74572 kB' 'KernelStack: 6336 kB' 'PageTables: 4148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.943 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 
18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.944 
18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.944 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.945 
18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.945 nr_hugepages=1024 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.945 resv_hugepages=0 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.945 surplus_hugepages=0 00:03:52.945 anon_hugepages=0 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7470056 kB' 'MemAvailable: 9417796 kB' 'Buffers: 2436 kB' 'Cached: 2157588 kB' 'SwapCached: 0 kB' 'Active: 889104 kB' 'Inactive: 1390356 kB' 'Active(anon): 129900 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'AnonPages: 121008 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145260 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74572 kB' 'KernelStack: 6320 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 
'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.945 
18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.945 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
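The long runs of "-- # continue" above are setup/common.sh's get_meminfo helper scanning every /proc/meminfo field until it reaches the requested key (first HugePages_Surp, then HugePages_Rsvd, then HugePages_Total), echoing that key's value and returning. A minimal sketch of that lookup, using illustrative names rather than the exact SPDK helper, looks like this:

# Sketch of the meminfo lookup the xtrace above is exercising; names are
# illustrative, the real helper lives in test/setup/common.sh.
get_meminfo_sketch() {          # usage: get_meminfo_sketch HugePages_Surp [node]
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # With a node index, prefer that node's meminfo file when it exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    # Per-node files prefix each line with "Node <N> "; strip that, then
    # split every remaining line on ": " and match the requested key.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

Against the meminfo dump printed in the trace, this sketch would report HugePages_Total as 1024 and both HugePages_Surp and HugePages_Rsvd as 0, which is what lets hugepages.sh confirm (( 1024 == nr_hugepages + surp + resv )) before get_nodes repeats the same per-node check against /sys/devices/system/node/node0/meminfo.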
00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.946 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- 
setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7469804 kB' 'MemUsed: 4772156 kB' 'SwapCached: 0 kB' 'Active: 889064 kB' 'Inactive: 1390356 kB' 'Active(anon): 129860 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 212 kB' 'Writeback: 0 kB' 'FilePages: 2160024 kB' 'Mapped: 48696 kB' 'AnonPages: 120968 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4144 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70688 kB' 'Slab: 145260 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 
18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.947 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:52.948 node0=1024 expecting 1024 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.948 18:17:08 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:52.948 00:03:52.948 real 0m1.000s 00:03:52.948 user 0m0.443s 00:03:52.948 sys 0m0.487s 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:52.948 ************************************ 00:03:52.948 END TEST default_setup 00:03:52.948 ************************************ 00:03:52.948 18:17:08 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:52.948 18:17:08 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:52.948 18:17:08 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:52.948 18:17:08 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:52.948 18:17:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:52.948 ************************************ 00:03:52.948 START TEST per_node_1G_alloc 00:03:52.948 ************************************ 00:03:52.948 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:03:52.948 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:52.948 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:03:52.948 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:52.948 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:52.948 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- 
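(A back-of-the-envelope sketch of the arithmetic the per_node_1G_alloc trace above walks through with get_test_nr_hugepages 1048576 0 -- variable names here are illustrative, and the 2048 kB default hugepage size is taken from the Hugepagesize value reported further down in the meminfo snapshots, not from this sketch itself:
size_kb=1048576                                   # requested 1G worth of hugepages for node 0, in kB
default_hugepage_kb=2048                          # assumed kernel default hugepage size (Hugepagesize: 2048 kB)
nr_hugepages=$(( size_kb / default_hugepage_kb )) # 1048576 / 2048 = 512, matching nr_hugepages=512 in the trace
echo "NRHUGE=$nr_hugepages HUGENODE=0"            # the values the trace then passes to scripts/setup.sh
)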
setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.949 18:17:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:53.207 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.207 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:53.207 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8528380 kB' 'MemAvailable: 10476120 kB' 'Buffers: 2436 kB' 'Cached: 2157588 kB' 'SwapCached: 0 kB' 'Active: 888976 kB' 'Inactive: 1390356 kB' 'Active(anon): 129772 kB' 'Inactive(anon): 0 kB' 
'Active(file): 759204 kB' 'Inactive(file): 1390356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 120880 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145192 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74504 kB' 'KernelStack: 6324 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.470 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8528380 kB' 'MemAvailable: 10476120 kB' 'Buffers: 2436 kB' 'Cached: 2157588 kB' 'SwapCached: 0 kB' 'Active: 889236 kB' 'Inactive: 1390356 kB' 'Active(anon): 130032 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121140 kB' 'Mapped: 48784 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145192 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74504 kB' 'KernelStack: 6292 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.471 18:17:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.471 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.472 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
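(The long runs of field-by-field comparisons above are setup/common.sh stepping through every /proc/meminfo line under xtrace until it finds the key it was asked for, here HugePages_Surp. A minimal stand-alone sketch of that lookup pattern, assuming the same "Key: value" layout; the function name and the sysfs branch are illustrative, not the real helper, and per-node sysfs files additionally prefix each line with "Node N ", which the real trace strips and this sketch omits for brevity:
get_meminfo_sketch() {                 # illustrative re-creation of the lookup, not the real setup/common.sh helper
  local get=$1 node=${2:-}             # usage: get_meminfo_sketch HugePages_Surp   -> prints 0 on this host
  local mem_f=/proc/meminfo
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo   # per-node counters, when a node id is given
  fi
  local var val _
  while IFS=': ' read -r var val _; do # same split the trace shows for lines like "HugePages_Surp: 0"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < "$mem_f"
  return 1
}
)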
00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8528128 kB' 'MemAvailable: 10475868 kB' 'Buffers: 2436 kB' 'Cached: 2157588 kB' 'SwapCached: 0 kB' 'Active: 888892 kB' 'Inactive: 1390356 kB' 'Active(anon): 129688 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 120844 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145192 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74504 kB' 'KernelStack: 6352 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.473 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
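The snapshot printed by the printf above is internally consistent with the test's name: HugePages_Total: 512 at Hugepagesize: 2048 kB accounts for the reported Hugetlb: 1048576 kB, i.e. one gigabyte of 2 MiB pages, which is what per_node_1G_alloc exercises. A quick check of that arithmetic:

  echo $(( 512 * 2048 )) kB   # 1048576 kB = 1 GiB, matching the Hugetlb line above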
00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.474 
18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.474 nr_hugepages=512 00:03:53.474 resv_hugepages=0 00:03:53.474 surplus_hugepages=0 00:03:53.474 anon_hugepages=0 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.474 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8528128 kB' 'MemAvailable: 10475868 kB' 'Buffers: 2436 kB' 'Cached: 2157588 kB' 'SwapCached: 0 kB' 'Active: 889132 kB' 'Inactive: 1390356 kB' 'Active(anon): 129928 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'AnonPages: 121084 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145192 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74504 kB' 'KernelStack: 6352 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
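The hugepages.sh lines a little further up (surp=0, resv=0, the echo nr_hugepages=512 block, the "(( 512 == nr_hugepages + surp + resv ))" test at @107) together with the HugePages_Total lookup still scrolling past here amount to checking that the hugepages actually present equal the requested count plus any surplus and reserved pages. A self-contained sketch of that accounting, with an illustrative helper standing in for the script's own get_meminfo:

  meminfo() { awk -v k="$1:" '$1 == k {print $2; exit}' /proc/meminfo; }
  nr_hugepages=512                        # requested pages, per the echo nr_hugepages=512 above
  surp=$(meminfo HugePages_Surp)          # 0 in this run
  resv=$(meminfo HugePages_Rsvd)          # 0 in this run
  total=$(meminfo HugePages_Total)        # 512 in this run
  (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"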
00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.475 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:53.476 18:17:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8528380 kB' 'MemUsed: 3713580 kB' 'SwapCached: 0 kB' 'Active: 889104 kB' 'Inactive: 1390352 kB' 'Active(anon): 129900 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390352 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 220 kB' 'Writeback: 0 kB' 'FilePages: 2160020 kB' 'Mapped: 48660 kB' 'AnonPages: 120984 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70688 kB' 'Slab: 145188 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74500 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
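The lookup running through this stretch differs from the earlier ones in one respect: get_meminfo was called as "get_meminfo HugePages_Surp 0", so node=0, the [[ -e /sys/devices/system/node/node0/meminfo ]] test above succeeds, and mem_f switches from /proc/meminfo to the per-NUMA-node file, whose lines carry a "Node 0 " prefix (the trace shows the script stripping it with an extglob expansion). A minimal per-node sketch, assuming node 0 exists and handling the prefix by simply discarding the first two fields instead:

  node=0
  while IFS=': ' read -r _ _ var val _; do      # fields: "Node" "0" "<name>" "<value>" ...
      [[ $var == HugePages_Surp ]] || continue
      echo "$val"                               # surplus 2 MiB pages on node 0 (0 in this run)
      break
  done < /sys/devices/system/node/node$node/meminfo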
00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.476 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:53.477 node0=512 expecting 512 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:53.477 00:03:53.477 real 0m0.544s 00:03:53.477 user 0m0.287s 00:03:53.477 sys 0m0.273s 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:53.477 18:17:09 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:53.477 ************************************ 00:03:53.477 END TEST per_node_1G_alloc 00:03:53.477 ************************************ 00:03:53.477 18:17:09 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:53.477 18:17:09 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:53.477 18:17:09 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:53.477 18:17:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:53.477 ************************************ 00:03:53.477 START TEST even_2G_alloc 00:03:53.477 ************************************ 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:53.477 
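The run above closes per_node_1G_alloc (node0 held the expected 512 pages). even_2G_alloc, which starts here, turns its 2097152 kB (2 GiB) request into a count of default-size hugepages; the trace that follows sets nr_hugepages=1024 and re-runs setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A worked example of that sizing arithmetic, using only values visible in the log (the stand-alone form below is illustrative, not the harness's get_test_nr_hugepages itself):

  # Requested pool size in kB, as passed by even_2G_alloc in the trace.
  size_kb=2097152
  # Default hugepage size reported by the kernel (2048 kB on this VM).
  hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  # 2097152 / 2048 = 1024, matching the nr_hugepages the trace sets next.
  echo "nr_hugepages=$(( size_kb / hp_kb ))"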
18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:53.477 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:54.051 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.051 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:54.051 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7484792 kB' 'MemAvailable: 9432532 kB' 'Buffers: 2436 kB' 'Cached: 2157588 kB' 'SwapCached: 0 kB' 'Active: 889436 kB' 'Inactive: 1390356 kB' 'Active(anon): 130232 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121344 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145156 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74468 kB' 'KernelStack: 6276 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 351220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.051 
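The long run of '[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]' / 'continue' records that follows is setup/common.sh's get_meminfo walking every field of the snapshot it just printed until it reaches AnonHugePages and echoes that value (0 here). A condensed, stand-alone sketch of the same idea; the function name is illustrative, and the real helper additionally handles the per-node /sys/devices/system/node/nodeN/meminfo files (that is what the mapfile and the 'Node +([0-9])' prefix-stripping lines above are for):

  # Pull one field out of /proc/meminfo the way the traced loop does:
  # split each line on ': ', skip until the requested key, print its value.
  get_field() {                      # illustrative name, not the harness helper
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
  }
  get_field AnonHugePages            # prints 0 on this VM, per the snapshot above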
18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.051 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.052 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 
18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7484792 kB' 'MemAvailable: 9432532 kB' 'Buffers: 2436 kB' 'Cached: 2157588 kB' 'SwapCached: 0 kB' 'Active: 889372 kB' 'Inactive: 1390356 kB' 'Active(anon): 130168 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390356 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 121320 kB' 'Mapped: 49064 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145152 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74464 kB' 'KernelStack: 6344 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 
'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.053 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
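For reference while the HugePages_Surp and (further down) HugePages_Rsvd lookups scroll past: the snapshot just printed already carries the counters the verification needs, HugePages_Total 1024, HugePages_Free 1024, HugePages_Rsvd 0, HugePages_Surp 0. Below is a self-contained way to pull the same counters and sanity-check them against the 1024 pages even_2G_alloc asked for; the exact bookkeeping in setup/hugepages.sh may differ, so treat this as an assumed shape of the check:

  expected=1024                                              # NRHUGE for even_2G_alloc
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  free=$(awk '/^HugePages_Free:/  {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
  # With no surplus pages the pool should hold exactly the requested count.
  (( total - surp == expected )) || echo "unexpected hugepage total: $total (free=$free surp=$surp)"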
00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 
18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.054 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7485288 kB' 'MemAvailable: 9433032 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889236 kB' 'Inactive: 1390360 kB' 'Active(anon): 130032 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 121176 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145140 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74452 kB' 'KernelStack: 6328 kB' 'PageTables: 4004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.055 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.056 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 
18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 
18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.057 nr_hugepages=1024 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:54.057 resv_hugepages=0 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.057 surplus_hugepages=0 00:03:54.057 anon_hugepages=0 00:03:54.057 18:17:09 
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.057 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7485288 kB' 'MemAvailable: 9433032 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889156 kB' 'Inactive: 1390360 kB' 'Active(anon): 129952 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'AnonPages: 121100 kB' 'Mapped: 48804 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145140 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74452 kB' 'KernelStack: 6312 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54676 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.058 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7485036 kB' 'MemUsed: 4756924 kB' 'SwapCached: 0 kB' 'Active: 889148 kB' 'Inactive: 1390360 kB' 'Active(anon): 129944 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 228 kB' 'Writeback: 0 kB' 'FilePages: 2160028 kB' 'Mapped: 48696 kB' 'AnonPages: 121136 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70688 kB' 'Slab: 145140 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.059 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 
18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.060 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.061 node0=1024 expecting 1024 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:54.061 00:03:54.061 real 0m0.541s 00:03:54.061 user 0m0.257s 00:03:54.061 sys 0m0.314s 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:54.061 18:17:09 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.061 
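[editor's note] The long runs of IFS=': ' / read -r var val _ / continue entries in the xtrace above come from the get_meminfo helper in setup/common.sh scanning /proc/meminfo (or a node's meminfo file under /sys) until it reaches the requested field. A minimal sketch of that pattern, reconstructed from the trace rather than copied from the actual SPDK source (the loop shape is an approximation), looks like this:

    # Sketch reconstructed from the xtrace above; the real setup/common.sh may differ.
    shopt -s extglob   # needed for the +([0-9]) pattern used to strip "Node <n> " prefixes

    get_meminfo() {
        local get=$1    # field to report, e.g. HugePages_Surp
        local node=$2   # empty => system-wide /proc/meminfo
        local var val _
        local mem_f=/proc/meminfo
        local -a mem

        # Per-node statistics live under /sys when a node number is given.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        # Node meminfo lines are prefixed with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the repeated 'continue' entries in the trace
            echo "$val"
            return 0
        done
        return 1
    }

With a helper of this shape, the HugePages_Surp and HugePages_Rsvd lookups above both return 0, so the subsequent check (( 1024 == nr_hugepages + surp + resv )) passes and node0 ends up with the expected 1024 pages.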
************************************ 00:03:54.061 END TEST even_2G_alloc 00:03:54.061 ************************************ 00:03:54.061 18:17:09 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:54.061 18:17:09 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:54.061 18:17:09 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:54.061 18:17:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.061 ************************************ 00:03:54.061 START TEST odd_alloc 00:03:54.061 ************************************ 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.061 18:17:09 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:54.634 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:54.634 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:54.634 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:54.634 18:17:10 
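odd_alloc deliberately asks for a page count that cannot be split evenly: HUGEMEM=2049 (MB) becomes a 2098176 kB request, get_test_nr_hugepages turns that into nr_hugepages=1025 2 MB pages, get_test_nr_hugepages_per_node places all 1025 on the only node (nodes_test[0]=1025), and scripts/setup.sh is re-run with HUGE_EVEN_ALLOC=yes. A rough sketch of the sizing step; the exact rounding inside hugepages.sh is not visible in this excerpt, so the ceiling division below is only an assumption that reproduces the 1025 seen in the trace:

  # Sizing sketch for the odd_alloc request (rounding is assumed, not lifted from the repo).
  default_hugepages=2048                              # kB, Hugepagesize from /proc/meminfo
  HUGEMEM=2049                                        # MB requested by the test
  size=$((HUGEMEM * 1024))                            # 2098176 kB
  nr_hugepages=$(((size + default_hugepages - 1) / default_hugepages))
  echo "nr_hugepages=$nr_hugepages"                   # -> 1025
  nodes_test[0]=$nr_hugepages                         # single-node VM: everything goes to node0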
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7494148 kB' 'MemAvailable: 9441892 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889412 kB' 'Inactive: 1390360 kB' 'Active(anon): 130208 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121364 kB' 'Mapped: 48896 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145112 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74424 kB' 'KernelStack: 6292 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- 
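The snapshot printed above already reflects that request: HugePages_Total and HugePages_Free are both 1025, and Hugetlb is 2099200 kB, which is exactly 1025 × 2048 kB. On a machine using a single hugepage size that identity should hold, and it can be spot-checked directly with standard /proc/meminfo keys (nothing SPDK-specific):

  # Cross-check Hugetlb == HugePages_Total * Hugepagesize (valid only when one hugepage size is in use).
  awk '/^HugePages_Total:/ {t=$2}
       /^Hugepagesize:/    {s=$2}
       /^Hugetlb:/         {h=$2}
       END {printf "total=%d size=%dkB hugetlb=%dkB ok=%s\n", t, s, h, (h == t * s) ? "yes" : "no"}' /proc/meminfo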
setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.634 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7494148 kB' 'MemAvailable: 9441892 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889412 kB' 'Inactive: 1390360 kB' 'Active(anon): 130208 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121364 kB' 'Mapped: 48896 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145112 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74424 kB' 'KernelStack: 6360 kB' 'PageTables: 3968 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.635 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.636 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.637 18:17:10 
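surp=0 closes the HugePages_Surp pass; together with anon=0 from the AnonHugePages pass, verify_nr_hugepages now only needs HugePages_Rsvd (the scan that begins below) before comparing what the kernel actually allocated with the 1025 pages the test asked for. The exact comparison inside setup/hugepages.sh is not shown in this excerpt, so the following is only a hedged sketch of its shape, reusing the lookup helper sketched earlier:

  # Assumed shape of the final verification step (illustrative; the real check may differ in detail).
  anon=$(get_meminfo_sketch AnonHugePages)      # 0 in this run; recorded but not used below
  surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
  resv=$(get_meminfo_sketch HugePages_Rsvd)
  got=$(get_meminfo_sketch HugePages_Total)
  expected=1025
  if [[ $got -eq $expected && $surp -eq 0 ]]; then
      echo "nr_hugepages=$got as expected"
  else
      echo "nr_hugepages=$got (surp=$surp resv=$resv), expected $expected"
  fi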
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7493896 kB' 'MemAvailable: 9441640 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889276 kB' 'Inactive: 1390360 kB' 'Active(anon): 130072 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121288 kB' 'Mapped: 48896 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145140 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74452 kB' 'KernelStack: 6340 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.637 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.638 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 
18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:54.639 nr_hugepages=1025 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:54.639 resv_hugepages=0 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:54.639 surplus_hugepages=0 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:54.639 anon_hugepages=0 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7493896 
kB' 'MemAvailable: 9441640 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889240 kB' 'Inactive: 1390360 kB' 'Active(anon): 130036 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121252 kB' 'Mapped: 48896 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145136 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74448 kB' 'KernelStack: 6324 kB' 'PageTables: 4020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459984 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.639 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 
18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
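The wall of "[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] ... continue" pairs above is setup/common.sh's get_meminfo helper running under xtrace: it snapshots /proc/meminfo (or a node's own meminfo) into an array, then walks it key by key, skipping every field until it reaches the one it was asked for, echoes that value, and returns 0. In this run it resolves HugePages_Rsvd to 0 (resv=0) and is partway through the second pass that will resolve HugePages_Total to 1025. A minimal bash sketch of that pattern, reconstructed from the trace rather than copied from setup/common.sh (get_meminfo_sketch is an illustrative name):

    shopt -s extglob                      # needed for the +([0-9]) pattern below
    get_meminfo_sketch() {
        local get=$1 node=${2-}           # key to look up, optional NUMA node
        local mem_f=/proc/meminfo line var val _
        local -a mem
        # Per-node lookups read that node's meminfo instead of the global file
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node files prefix each line with "Node <n> "; strip it, as the trace does
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Each non-matching key shows up as one [[ ... ]] / continue pair in the log
            [[ $var == "$get" ]] || continue
            echo "$val"                   # e.g. 0 for HugePages_Rsvd, 1025 for HugePages_Total
            return 0
        done
        return 1
    }

The "# echo 0" / "# return 0" entries earlier in the trace are the matching branch of exactly this loop.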
00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.640 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:54.641 
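Once both lookups return, hugepages.sh reconciles the kernel's totals with what the test configured: HugePages_Total must equal nr_hugepages + surplus + reserved, i.e. 1025 == 1025 + 0 + 0 in this run, and get_nodes then records one expected count per NUMA node (a single node here, so no_nodes=1). A self-contained sketch of that check, using awk in place of the get_meminfo helper and this run's numbers; names mirror the trace but the snippet is illustrative, not the hugepages.sh source:

    nr_hugepages=1025                                   # what odd_alloc asked for
    resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    (( total == nr_hugepages + surp + resv )) && echo "hugepage accounting consistent"
    # get_nodes: one slot per NUMA node, keyed by the N in .../node/nodeN
    declare -a nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=$nr_hugepages
    done
    echo "no_nodes=${#nodes_sys[@]}"                    # 1 on this single-node VM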
18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:54.641 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7493896 kB' 'MemUsed: 4748064 kB' 'SwapCached: 0 kB' 'Active: 889184 kB' 'Inactive: 1390360 kB' 'Active(anon): 129980 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2160028 kB' 'Mapped: 48896 kB' 'AnonPages: 121208 kB' 'Shmem: 10464 kB' 'KernelStack: 6308 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70688 kB' 'Slab: 145136 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.642 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.643 node0=1025 expecting 1025 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:03:54.643 
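The node-0 pass/fail line just above ("node0=1025 expecting 1025", then [[ 1025 == \1\0\2\5 ]]) is the point of the test: odd_alloc requests an odd page count, presumably so that any code path assuming an even or power-of-two split across nodes would surface here, and the check passes because node 0 reports all 1025 pages with no surplus. The same number can be read straight from the node's own meminfo; an illustrative one-liner, not part of the test scripts:

    awk '/HugePages_Total/ {print "node0=" $NF " expecting 1025"}' \
        /sys/devices/system/node/node0/meminfo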
00:03:54.643 real 0m0.519s 00:03:54.643 user 0m0.268s 00:03:54.643 sys 0m0.285s 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:54.643 18:17:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.643 ************************************ 00:03:54.643 END TEST odd_alloc 00:03:54.643 ************************************ 00:03:54.643 18:17:10 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:54.643 18:17:10 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:54.643 18:17:10 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:54.643 18:17:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.643 ************************************ 00:03:54.644 START TEST custom_alloc 00:03:54.644 ************************************ 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.644 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.216 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.216 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.216 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.216 18:17:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8546800 kB' 'MemAvailable: 10494544 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 888976 kB' 'Inactive: 1390360 kB' 'Active(anon): 129772 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 121188 kB' 'Mapped: 48788 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145184 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74496 kB' 'KernelStack: 6404 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
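For custom_alloc the sizing step shown above is plain arithmetic: the test requests a 1048576 kB (1 GiB) pool, the default hugepage size on this VM is 2048 kB, so nr_hugepages becomes 512, all of it pinned to node 0 via HUGENODE='nodes_hp[0]=512' before scripts/setup.sh is re-run (its device output is the "Already using the uio_pci_generic driver" block above). The meminfo snapshot reflects that (HugePages_Total: 512, Hugetlb: 1048576 kB), and the AnonHugePages scan that continues below is the THP baseline verify_nr_hugepages takes after confirming transparent hugepages are not set to "never". A worked version of the arithmetic with this run's numbers (illustrative, not the hugepages.sh source):

    size_kb=1048576                                                    # requested pool: 1 GiB
    hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this VM
    echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"               # -> nr_hugepages=512
    echo "hugetlb_kb=$(( 512 * hugepagesize_kb ))"                     # -> 1048576, matching Hugetlb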
00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.216 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.217 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8546636 kB' 'MemAvailable: 10494380 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889092 kB' 'Inactive: 1390360 kB' 'Active(anon): 129888 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121076 kB' 'Mapped: 48756 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145168 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74480 kB' 'KernelStack: 6388 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54692 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 
'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.218 18:17:10 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.218 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
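The trace above is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" pair at a time (IFS=': ', read -r var val _) until it reaches the field that was asked for, here HugePages_Surp. A minimal self-contained sketch of that lookup pattern in bash follows; the helper name get_meminfo_value is an assumption for illustration, and unlike the real get_meminfo it ignores the per-node /sys/devices/system/node/.../meminfo case seen in the trace.

get_meminfo_value() {
    # Scan /proc/meminfo for a single field and print its numeric value.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # var holds the field name, val the number; a trailing "kB" lands in _.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1   # field not present
}

# Example: surp=$(get_meminfo_value HugePages_Surp)   # prints 0 on this VM per the snapshot above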
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8546636 kB' 'MemAvailable: 10494380 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889168 kB' 'Inactive: 1390360 kB' 'Active(anon): 129964 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121144 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145192 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74504 kB' 'KernelStack: 6352 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:55.219 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.220 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.220 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.220 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.220 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.220 18:17:10 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.220 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
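The entries that follow show setup/hugepages.sh reading HugePages_Rsvd back as 0, then echoing nr_hugepages=512, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0 and checking (( 512 == nr_hugepages + surp + resv )). A rough standalone sketch of that accounting is below; the variable names (requested, total, surp, resv) and messages are assumptions for illustration, not the test's own code, and it reads the counters with awk rather than the script's get_meminfo.

#!/usr/bin/env bash
# Pull the hugepage counters straight from /proc/meminfo.
requested=512
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)

# Mirrors the check below: with the snapshot above (HugePages_Total: 512,
# HugePages_Surp: 0, HugePages_Rsvd: 0) the requested count equals what the
# kernel reports plus any surplus and reserved pages.
if (( requested == total + surp + resv )); then
    echo "hugepage pool OK: ${total} pages (surp=${surp}, resv=${resv})"
else
    echo "hugepage pool mismatch: wanted ${requested}, kernel reports ${total}" >&2
    exit 1
fi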
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:03:55.221 nr_hugepages=512 00:03:55.221 resv_hugepages=0 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.221 surplus_hugepages=0 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.221 anon_hugepages=0 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:55.221 18:17:10 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:03:55.221 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.221 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.221 
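For readers following the xtrace above: the repeated IFS=': ' / read -r var val _ / continue entries are setup/common.sh's get_meminfo walking a meminfo file one key at a time until it reaches the requested field (HugePages_Rsvd came back 0 here, and the block that follows repeats the walk for HugePages_Total). A minimal sketch of that lookup, reconstructed from the trace rather than copied from setup/common.sh (the name get_meminfo_sketch and the loop shape are the editor's assumptions; the /proc and per-node meminfo paths are the standard kernel ones):

    #!/usr/bin/env bash
    shopt -s extglob

    # Sketch of the meminfo lookup the trace above is exercising.
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Prefer the per-node file when a node id is given and it exists,
        # mirroring the node= / node=0 cases visible in the trace.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#Node +([0-9]) }      # per-node lines carry a "Node <id> " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                  # kB figure, or a bare count for HugePages_*
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    # The values echoed in the trace: 0 for HugePages_Rsvd, 512 for HugePages_Total.
    resv=$(get_meminfo_sketch HugePages_Rsvd)
    total=$(get_meminfo_sketch HugePages_Total)

The odd-looking patterns such as \H\u\g\e\P\a\g\e\s\_\R\s\v\d are just how bash xtrace prints the quoted right-hand side of the [[ $var == "$get" ]] comparison, with every character escaped to mark a literal (non-glob) match.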
18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:55.221 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:55.221 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.221 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.221 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.221 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.221 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8546636 kB' 'MemAvailable: 10494380 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889064 kB' 'Inactive: 1390360 kB' 'Active(anon): 129860 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'AnonPages: 121000 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145192 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74504 kB' 'KernelStack: 6320 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985296 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.222 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.223 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 8547240 kB' 'MemUsed: 3694720 kB' 'SwapCached: 0 kB' 
'Active: 889092 kB' 'Inactive: 1390360 kB' 'Active(anon): 129888 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 244 kB' 'Writeback: 0 kB' 'FilePages: 2160028 kB' 'Mapped: 48696 kB' 'AnonPages: 121056 kB' 'Shmem: 10464 kB' 'KernelStack: 6320 kB' 'PageTables: 4092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70688 kB' 'Slab: 145168 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 
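The trace in this stretch re-checks the same totals per NUMA node: get_nodes records each node's hugepage count in nodes_sys, HugePages_Surp is read out of /sys/devices/system/node/node0/meminfo (0 here), and the reserved/surplus pages are folded into nodes_test before the 'node0=512 expecting 512' line that follows. A rough, single-node sketch of that accounting, assuming 2048 kB hugepages as reported in the trace and using the per-node sysfs counters instead of node<id>/meminfo (the test reads the latter, but the per-node HugePages_Total figure is the same):

    #!/usr/bin/env bash
    shopt -s extglob nullglob

    declare -a nodes_sys nodes_test
    expected=512          # what custom_alloc asked node 0 to hold
    resv=0 surp=0         # HugePages_Rsvd and HugePages_Surp both came back 0 above

    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}
        # Per-node total as the kernel reports it; this VM has a single node,
        # so every discovered node is expected to hold the full 512 pages.
        nodes_sys[id]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        nodes_test[id]=$expected
        (( nodes_test[id] += resv + surp ))
    done

    echo "node0=${nodes_test[0]} expecting ${nodes_sys[0]}"   # node0=512 expecting 512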
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.224 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.225 node0=512 expecting 512 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:55.225 00:03:55.225 real 0m0.530s 00:03:55.225 user 0m0.275s 00:03:55.225 sys 0m0.288s 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:55.225 18:17:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.225 ************************************ 00:03:55.225 END TEST custom_alloc 00:03:55.225 ************************************ 00:03:55.225 18:17:11 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:55.225 18:17:11 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:55.225 18:17:11 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:55.225 18:17:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.225 ************************************ 00:03:55.225 START TEST no_shrink_alloc 00:03:55.225 ************************************ 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:55.225 18:17:11 
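At this point custom_alloc has passed and no_shrink_alloc starts by converting its 2097152 kB (2 GiB) request into a hugepage count via get_test_nr_hugepages, pinned to node 0 by the trailing argument. The arithmetic it performs amounts to the following sketch of the calculation (not the script itself; the awk lookup stands in for the script's own Hugepagesize read):

    #!/usr/bin/env bash
    # 2097152 kB requested / 2048 kB per hugepage = 1024 pages, all on node 0.
    size_kb=2097152
    hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 in this run
    (( nr_hugepages = size_kb / hugepage_kb ))

    declare -a nodes_test
    nodes_test[0]=$nr_hugepages
    echo "nr_hugepages=$nr_hugepages"                                # 1024, as in the trace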
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.225 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:55.482 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.749 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.749 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7554696 kB' 'MemAvailable: 9502440 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889588 kB' 'Inactive: 1390360 kB' 'Active(anon): 130384 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 121284 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145140 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74452 kB' 'KernelStack: 6324 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54740 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.749 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
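The verify_nr_hugepages pass that produces the trace around here first gates on transparent hugepages: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test compares the THP 'enabled' sysfs knob against the literal [never], and only when THP is not disabled does it walk /proc/meminfo for AnonHugePages (which this run reports as 0 kB). Roughly, and assuming the standard sysfs path for the THP setting:

    #!/usr/bin/env bash
    # "always [madvise] never" is the content of the THP 'enabled' knob; anon
    # hugepages are only counted when "[never]" is not the selected mode.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    anon_kb=0
    if [[ $thp != *"[never]"* ]]; then
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 in this run
    fi
    echo "anon_hugepages=${anon_kb} kB"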
00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.750 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7554696 kB' 'MemAvailable: 9502440 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889532 kB' 'Inactive: 1390360 kB' 'Active(anon): 130328 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 121460 kB' 'Mapped: 48832 kB' 'Shmem: 10464 kB' 
'KReclaimable: 70688 kB' 'Slab: 145144 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74456 kB' 'KernelStack: 6324 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.751 18:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.751 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 
18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.752 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7554696 kB' 'MemAvailable: 9502440 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889196 kB' 'Inactive: 1390360 kB' 'Active(anon): 129992 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 121100 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145140 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74452 kB' 'KernelStack: 6336 kB' 'PageTables: 4140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.753 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 
18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.754 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:55.755 nr_hugepages=1024 00:03:55.755 resv_hugepages=0 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.755 surplus_hugepages=0 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.755 anon_hugepages=0 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7554696 kB' 'MemAvailable: 9502440 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889256 kB' 'Inactive: 1390360 kB' 'Active(anon): 130052 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'AnonPages: 121156 kB' 'Mapped: 48696 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145140 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74452 kB' 'KernelStack: 6336 kB' 'PageTables: 4140 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.755 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 
18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 
18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.756 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
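The iterations above (and the ones that continue below) are setup/common.sh's get_meminfo helper scanning every field of the meminfo snapshot for the one that was requested, HugePages_Total at this point; each non-matching field produces one "[[ ... ]]" / "continue" pair in the xtrace. Reconstructed from the traced commands, the helper boils down to the sketch below; the input redirection, the exact node/existence test and the not-found return are not visible in the trace and are assumptions.

  shopt -s extglob                       # the +([0-9]) pattern below needs extglob
  get_meminfo() {                        # usage: get_meminfo <field> [numa-node]
      local get=$1 node=$2
      local var val mem mem_f=/proc/meminfo
      # A node argument switches to that node's own meminfo file (simplified
      # from the common.sh@23-@25 checks in the trace).
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem <"$mem_f"           # redirection is not visible in the xtrace
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N ", strip it
      while IFS=': ' read -r var val _; do
          [[ $var == $get ]] || continue # every non-matching field logs one "continue" above
          echo "$val"                    # the trailing "kB", if any, lands in the discarded _ field
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1                           # field absent (assumption; this path is never traced)
  }

The calls seen in this log are of the form resv=$(get_meminfo HugePages_Rsvd) for the whole system and get_meminfo HugePages_Surp 0 for a single NUMA node.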
00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7554696 kB' 'MemUsed: 4687264 kB' 'SwapCached: 0 kB' 'Active: 888948 kB' 'Inactive: 1390360 kB' 'Active(anon): 129744 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 252 kB' 'Writeback: 0 kB' 'FilePages: 2160028 kB' 'Mapped: 48696 kB' 'AnonPages: 120888 kB' 'Shmem: 10464 kB' 'KernelStack: 6352 kB' 'PageTables: 4196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70688 kB' 'Slab: 145140 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
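Before this per-node lookup, the hugepages.sh@27-@33 commands above enumerated the NUMA nodes and recorded the expected page count for each one. A sketch reconstructed from those traced commands follows; the 1024 is simply the value recorded in this run, and deriving no_nodes from the array size (plus the nullglob setting) is an assumption.

  shopt -s extglob nullglob              # +([0-9]) globbing; nullglob keeps the loop empty if no node matches
  get_nodes() {                          # sketch of hugepages.sh@27-@33 as traced above
      local node
      nodes_sys=()
      for node in /sys/devices/system/node/node+([0-9]); do
          # The trace records 1024 pages for node0; the real script derives the
          # per-node value rather than hard-coding it.
          nodes_sys[${node##*node}]=1024
      done
      no_nodes=${#nodes_sys[@]}          # 1 on this single-node VM
      (( no_nodes > 0 ))                 # at least one NUMA node is required
  }

Note that the node0 file being read here carries per-node fields such as MemUsed and FilePages and omits system-wide ones such as CommitLimit and the Vmalloc counters, which is why the same matching loop walks a slightly different field list than it did for /proc/meminfo.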
00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.757 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 
18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.758 node0=1024 expecting 1024 00:03:55.758 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:55.759 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:55.759 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:55.759 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:55.759 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:55.759 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.759 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:56.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:56.016 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:56.016 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:03:56.281 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.281 18:17:11 
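The first verify_nr_hugepages pass completed just above: the global and per-node counts matched ("node0=1024 expecting 1024"), scripts/setup.sh was re-run with NRHUGE=512 and CLEAR_HUGE=no, and it reported that the 1024 pages already allocated on node0 cover the 512 requested, after which a second verification pass begins. Condensed, with this run's numbers, the accounting such a pass performs looks like the sketch below (the @NNN markers refer to the hugepages.sh lines traced above; which meminfo field supplied the left-hand 1024 at @107 is not visible in this part of the log).

  nr_hugepages=1024 surp=0 resv=0              # echoed above as nr_hugepages=1024, surplus/reserved 0
  nodes_sys=([0]=1024) nodes_test=([0]=1024)   # filled in by the get_nodes step
  (( 1024 == nr_hugepages + surp + resv ))     # @107 and @110: both evaluate to 1024 == 1024 here
  for node in "${!nodes_test[@]}"; do
      (( nodes_test[node] += resv ))           # @116
      (( nodes_test[node] += 0 ))              # @117: per-node HugePages_Surp, 0 on node0
  done
  echo "node0=${nodes_test[0]} expecting ${nodes_sys[0]}"   # @128: "node0=1024 expecting 1024"
  [[ ${nodes_test[0]} == "${nodes_sys[0]}" ]]               # @130: passes, so the layout is accepted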
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7552604 kB' 'MemAvailable: 9500348 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889984 kB' 'Inactive: 1390360 kB' 'Active(anon): 130780 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 256 kB' 'Writeback: 0 kB' 'AnonPages: 122296 kB' 'Mapped: 48880 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145160 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74472 kB' 'KernelStack: 6436 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54756 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 
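The second pass is currently probing anonymous huge pages: the hugepages.sh@96 test above compares the active transparent-hugepage mode string ("always [madvise] never" on this VM) against *[never]*, and only when THP is not disabled does it read AnonHugePages. A self-contained sketch of that probe; the sysfs path and the awk stand-in for the traced get_meminfo call are assumptions.

  # Probe sketched from hugepages.sh@96-@97 above.  The sysfs path is assumed;
  # the trace only shows the already-expanded mode string.
  thp_mode=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
  anon=0
  if [[ $thp_mode != *"[never]"* ]]; then
      # THP is enabled in some mode ("[madvise]" is active here), so count the
      # THP-backed anonymous memory; this stands in for get_meminfo AnonHugePages.
      anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
  fi
  echo "anon_hugepages=$anon"            # the first pass above echoed anon_hugepages=0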
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 
18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.281 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7553584 kB' 'MemAvailable: 9501328 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889476 kB' 'Inactive: 1390360 kB' 'Active(anon): 130272 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121432 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145172 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74484 kB' 'KernelStack: 6340 kB' 'PageTables: 4228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:56.282 18:17:11 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.282 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.283 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7553472 kB' 'MemAvailable: 9501216 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 889272 kB' 'Inactive: 1390360 kB' 'Active(anon): 130068 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 121220 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 'Slab: 145176 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74488 kB' 'KernelStack: 6352 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54724 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.284 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.285 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:56.286 nr_hugepages=1024 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.286 resv_hugepages=0 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.286 surplus_hugepages=0 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.286 anon_hugepages=0 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7553472 kB' 'MemAvailable: 9501216 kB' 'Buffers: 2436 kB' 'Cached: 2157592 kB' 'SwapCached: 0 kB' 'Active: 888924 kB' 'Inactive: 1390360 kB' 'Active(anon): 129720 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 120852 kB' 'Mapped: 48700 kB' 'Shmem: 10464 kB' 'KReclaimable: 70688 kB' 
'Slab: 145176 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74488 kB' 'KernelStack: 6336 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461008 kB' 'Committed_AS: 349408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54708 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 184172 kB' 'DirectMap2M: 5058560 kB' 'DirectMap1G: 9437184 kB' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.286 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.287 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:56.288 18:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241960 kB' 'MemFree: 7553220 kB' 'MemUsed: 4688740 kB' 'SwapCached: 0 kB' 'Active: 888908 kB' 'Inactive: 1390360 kB' 'Active(anon): 129704 kB' 'Inactive(anon): 0 kB' 'Active(file): 759204 kB' 'Inactive(file): 1390360 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'FilePages: 2160028 kB' 'Mapped: 48700 kB' 'AnonPages: 121100 kB' 'Shmem: 10464 kB' 'KernelStack: 6336 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 70688 kB' 'Slab: 145172 kB' 'SReclaimable: 70688 kB' 'SUnreclaim: 74484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 
18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.288 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.289 node0=1024 expecting 1024 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:56.289 18:17:12 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:56.289 00:03:56.290 real 0m1.014s 00:03:56.290 user 0m0.526s 00:03:56.290 sys 0m0.556s 00:03:56.290 18:17:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:56.290 18:17:12 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:56.290 ************************************ 00:03:56.290 END TEST no_shrink_alloc 00:03:56.290 ************************************ 00:03:56.290 18:17:12 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:56.290 18:17:12 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:56.290 18:17:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:56.290 18:17:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.290 18:17:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:56.290 18:17:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.290 18:17:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:56.290 18:17:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:56.290 18:17:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:56.290 00:03:56.290 real 0m4.602s 00:03:56.290 user 0m2.228s 00:03:56.290 sys 0m2.457s 00:03:56.290 18:17:12 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:56.290 18:17:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:56.290 ************************************ 00:03:56.290 END TEST hugepages 00:03:56.290 ************************************ 00:03:56.290 18:17:12 setup.sh -- 
setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:56.290 18:17:12 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:56.290 18:17:12 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:56.290 18:17:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:56.290 ************************************ 00:03:56.290 START TEST driver 00:03:56.290 ************************************ 00:03:56.290 18:17:12 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:03:56.549 * Looking for test storage... 00:03:56.549 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:56.549 18:17:12 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:56.549 18:17:12 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.549 18:17:12 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.117 18:17:12 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:57.117 18:17:12 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:57.117 18:17:12 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:57.117 18:17:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:57.117 ************************************ 00:03:57.117 START TEST guess_driver 00:03:57.117 ************************************ 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:03:57.117 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo 
uio_pci_generic 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:57.117 Looking for driver=uio_pci_generic 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.117 18:17:12 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:57.683 18:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:03:57.683 18:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:03:57.683 18:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.941 18:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.941 18:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:57.941 18:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.941 18:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:57.941 18:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:03:57.941 18:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:57.941 18:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:57.941 18:17:13 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:57.941 18:17:13 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:57.941 18:17:13 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:58.507 00:03:58.507 real 0m1.479s 00:03:58.507 user 0m0.540s 00:03:58.507 sys 0m0.907s 00:03:58.507 18:17:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:58.507 18:17:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:58.507 ************************************ 00:03:58.507 END TEST guess_driver 00:03:58.507 ************************************ 00:03:58.507 ************************************ 00:03:58.507 END TEST driver 00:03:58.507 ************************************ 00:03:58.507 00:03:58.507 real 0m2.154s 00:03:58.507 user 0m0.776s 00:03:58.507 sys 0m1.393s 00:03:58.507 18:17:14 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:58.507 18:17:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:58.507 18:17:14 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:58.507 18:17:14 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:58.507 18:17:14 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:58.507 18:17:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:58.507 ************************************ 00:03:58.507 START TEST devices 00:03:58.507 
************************************ 00:03:58.507 18:17:14 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:03:58.765 * Looking for test storage... 00:03:58.765 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:58.765 18:17:14 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:58.765 18:17:14 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:58.765 18:17:14 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.765 18:17:14 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:59.332 18:17:15 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:59.332 18:17:15 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:03:59.332 18:17:15 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:59.332 18:17:15 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:59.332 18:17:15 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:59.332 18:17:15 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:59.332 18:17:15 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 
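The devices.sh trace above sets up the block/PCI bookkeeping (blocks, blocks_to_pci, min_disk_size) and skips zoned namespaces before probing each /sys/block/nvme* entry. The following is a minimal sketch of that filtering written for this log, assuming the sysfs layout the trace shows (queue/zoned, a size file counted in 512-byte sectors, and a device symlink chain that resolves to the owning PCI address); it is an illustration, not SPDK's setup/devices.sh.

#!/usr/bin/env bash
# Illustrative sketch only; paths and the size floor follow the trace above.
# The traced glob additionally excludes '*c*' multipath controller nodes via extglob.
min_disk_size=3221225472          # 3 GiB, as declared at devices.sh@198
declare -a blocks
declare -A blocks_to_pci

for nvme in /sys/block/nvme*; do
    block=${nvme##*/}
    # Zoned namespaces are excluded (queue/zoned != "none"), mirroring get_zoned_devs.
    if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
        continue
    fi
    # sysfs reports the size in 512-byte sectors; enforce the minimum disk size.
    size_bytes=$(( $(<"$nvme/size") * 512 ))
    (( size_bytes >= min_disk_size )) || continue
    blocks+=("$block")
    # Resolve the owning PCI address (e.g. 0000:00:11.0); this lookup is an assumption.
    blocks_to_pci["$block"]=$(basename "$(readlink -f "$nvme/device/device")")
done

for block in "${blocks[@]}"; do
    printf 'candidate disk: %s on %s\n' "$block" "${blocks_to_pci[$block]}"
done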
00:03:59.332 18:17:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:59.332 18:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:59.332 18:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:59.332 18:17:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:59.332 18:17:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:59.332 18:17:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:59.332 18:17:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:59.332 18:17:15 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:03:59.332 No valid GPT data, bailing 00:03:59.332 18:17:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:59.592 18:17:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:59.592 18:17:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:59.592 18:17:15 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:03:59.592 No valid GPT data, bailing 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:03:59.592 18:17:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n2 00:03:59.592 18:17:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:03:59.592 18:17:15 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 
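Each candidate namespace is then probed for an existing partition table; "No valid GPT data, bailing" in the trace means the disk is free to claim. Below is a minimal sketch of that probe based only on the blkid call visible in the log; the traced helper also runs scripts/spdk-gpt.py, which is omitted here.

#!/usr/bin/env bash
# Sketch of the in-use probe seen above; illustration only, not scripts/common.sh.
block_in_use() {
    local block=$1 pt
    # An empty PTTYPE means no partition table was found on the device.
    pt=$(blkid -s PTTYPE -o value "/dev/$block" 2>/dev/null)
    [[ -n $pt ]]   # non-empty output => something already lives on the disk
}

if ! block_in_use nvme0n1; then
    echo "nvme0n1 carries no partition table; safe to use for the test"
fi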
00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:03:59.592 No valid GPT data, bailing 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:03:59.592 18:17:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n3 00:03:59.592 18:17:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:03:59.592 18:17:15 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:03:59.592 No valid GPT data, bailing 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:59.592 18:17:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:03:59.592 18:17:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:03:59.592 18:17:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:03:59.592 18:17:15 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:03:59.592 18:17:15 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:59.592 18:17:15 setup.sh.devices 
-- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:59.592 18:17:15 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:59.592 18:17:15 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:59.592 18:17:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:59.592 ************************************ 00:03:59.592 START TEST nvme_mount 00:03:59.592 ************************************ 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:59.592 18:17:15 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:00.967 Creating new GPT entries in memory. 00:04:00.967 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:00.967 other utilities. 00:04:00.967 18:17:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:00.967 18:17:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:00.967 18:17:16 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:00.967 18:17:16 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:00.967 18:17:16 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:01.900 Creating new GPT entries in memory. 00:04:01.900 The operation has completed successfully. 
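With a test disk selected, nvme_mount zaps the drive, creates one partition (sectors 2048 through 264191 per the sgdisk call above), formats it ext4 and mounts it under the repo's test/setup/nvme_mount directory. The sketch below condenses that flow using the same paths and flags the trace shows; the traced script additionally serializes sgdisk behind flock and scripts/sync_dev_uevents.sh, both omitted here, so treat this as an illustration rather than the script itself.

#!/usr/bin/env bash
set -euo pipefail
# Condensed illustration of the partition/format/mount steps traced above.
disk=/dev/nvme0n1
part=${disk}p1
nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount

sgdisk "$disk" --zap-all                 # wipe any existing GPT/MBR structures
sgdisk "$disk" --new=1:2048:264191       # first partition, same sector range as the trace
mkdir -p "$nvme_mount"
mkfs.ext4 -qF "$part"                    # quiet + force, matching common.sh@71
mount "$part" "$nvme_mount"
touch "$nvme_mount/test_nvme"            # dummy file later checked by the verify step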
00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 58783 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:01.900 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.158 18:17:17 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:02.158 18:17:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.158 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:02.158 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.158 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.158 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:02.159 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.159 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:02.159 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:02.159 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:02.159 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.416 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.416 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:02.416 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:02.416 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:02.416 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:02.416 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:02.680 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:02.680 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:02.680 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:02.680 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:02.680 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:02.680 18:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:02.680 18:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.680 18:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:02.680 18:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:02.680 18:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.680 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:02.681 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:02.681 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local 
mounts=nvme0n1:nvme0n1 00:04:02.681 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:02.681 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:02.681 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:02.681 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:02.681 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:02.681 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:02.681 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.681 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:02.681 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:02.681 18:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.681 18:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:02.952 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:02.952 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:02.952 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:02.952 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.952 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:02.952 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.952 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:02.952 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.952 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:02.952 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.952 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.952 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:02.952 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- 
setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.210 18:17:18 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:03.469 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:03.469 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:03.469 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:03.469 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.469 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:03.469 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.469 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:03.469 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.469 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:03.469 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.727 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:03.727 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:03.727 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:03.727 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:03.727 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:03.727 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:03.727 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:03.727 18:17:19 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:03.727 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:03.727 00:04:03.727 real 0m3.932s 00:04:03.727 user 0m0.659s 00:04:03.727 sys 0m1.012s 00:04:03.727 ************************************ 00:04:03.727 END TEST nvme_mount 00:04:03.727 ************************************ 00:04:03.727 18:17:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:03.727 18:17:19 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # 
set +x 00:04:03.727 18:17:19 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:03.727 18:17:19 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:03.727 18:17:19 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:03.727 18:17:19 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:03.727 ************************************ 00:04:03.727 START TEST dm_mount 00:04:03.727 ************************************ 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:03.727 18:17:19 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:04.665 Creating new GPT entries in memory. 00:04:04.665 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:04.665 other utilities. 00:04:04.665 18:17:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:04.665 18:17:20 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:04.665 18:17:20 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:04.665 18:17:20 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:04.665 18:17:20 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:04:06.039 Creating new GPT entries in memory. 00:04:06.039 The operation has completed successfully. 
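Before the dm_mount body continues, it is worth summarizing what the nvme_mount test above actually exercised: format the partition, mount it, drop a dummy file for the verify step, then tear everything down with umount and wipefs. A hedged sketch of that cycle follows, using /mnt/nvme_test as an example mount point only (the harness itself mounts under test/setup/nvme_mount in the repo).

    PART=/dev/nvme0n1p1
    MNT=/mnt/nvme_test           # example path, not the harness's mount point
    mkdir -p "$MNT"
    mkfs.ext4 -qF "$PART"        # quiet + force, the same flags as the setup/common.sh mkfs step
    mount "$PART" "$MNT"
    touch "$MNT/test_nvme"       # dummy file the verify step checks for
    # ... verification would run here ...
    umount "$MNT"
    wipefs --all "$PART"         # clears the ext4 signature (the "53 ef" bytes logged above)
    wipefs --all /dev/nvme0n1    # then the GPT/PMBR signatures on the whole disk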
00:04:06.039 18:17:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:06.039 18:17:21 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.039 18:17:21 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:06.039 18:17:21 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:06.039 18:17:21 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:04:06.974 The operation has completed successfully. 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 59216 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@49 
-- # local mounts=nvme0n1:nvme_dm_test 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:06.974 18:17:22 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:04:07.233 18:17:23 
setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:07.233 18:17:23 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:07.491 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:07.491 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:07.491 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:07.491 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.491 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:07.491 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.749 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:07.749 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.749 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:04:07.749 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:07.749 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:07.749 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:07.749 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:07.749 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:07.749 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:07.749 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:07.749 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:07.749 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:07.749 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:08.006 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:08.006 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:08.006 18:17:23 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all 
/dev/nvme0n1p2 00:04:08.006 00:04:08.006 real 0m4.202s 00:04:08.006 user 0m0.477s 00:04:08.006 sys 0m0.689s 00:04:08.006 18:17:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:08.006 18:17:23 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:08.006 ************************************ 00:04:08.006 END TEST dm_mount 00:04:08.006 ************************************ 00:04:08.006 18:17:23 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:08.006 18:17:23 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:08.006 18:17:23 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:08.006 18:17:23 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.006 18:17:23 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:08.006 18:17:23 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:08.006 18:17:23 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:08.264 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:08.264 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:08.264 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:08.264 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:08.264 18:17:24 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:08.264 18:17:24 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:08.264 18:17:24 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:08.264 18:17:24 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:08.264 18:17:24 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:08.264 18:17:24 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:08.264 18:17:24 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:08.264 00:04:08.264 real 0m9.629s 00:04:08.264 user 0m1.752s 00:04:08.264 sys 0m2.295s 00:04:08.264 18:17:24 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:08.264 18:17:24 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:08.264 ************************************ 00:04:08.264 END TEST devices 00:04:08.264 ************************************ 00:04:08.264 00:04:08.264 real 0m21.373s 00:04:08.264 user 0m6.905s 00:04:08.265 sys 0m8.895s 00:04:08.265 18:17:24 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:08.265 18:17:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.265 ************************************ 00:04:08.265 END TEST setup.sh 00:04:08.265 ************************************ 00:04:08.265 18:17:24 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:08.831 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.831 Hugepages 00:04:08.831 node hugesize free / total 00:04:08.831 node0 1048576kB 0 / 0 00:04:08.831 node0 2048kB 2048 / 2048 00:04:08.831 00:04:08.831 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:09.088 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:09.088 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:09.088 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 
nvme0n2 nvme0n3 00:04:09.088 18:17:24 -- spdk/autotest.sh@130 -- # uname -s 00:04:09.088 18:17:24 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:09.088 18:17:24 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:09.088 18:17:24 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.019 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.019 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.019 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.019 18:17:25 -- common/autotest_common.sh@1528 -- # sleep 1 00:04:10.951 18:17:26 -- common/autotest_common.sh@1529 -- # bdfs=() 00:04:10.951 18:17:26 -- common/autotest_common.sh@1529 -- # local bdfs 00:04:10.951 18:17:26 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:04:10.951 18:17:26 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:04:10.951 18:17:26 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:10.951 18:17:26 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:10.951 18:17:26 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:10.951 18:17:26 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:10.951 18:17:26 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:10.951 18:17:26 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:04:10.951 18:17:26 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:10.951 18:17:26 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:11.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:11.517 Waiting for block devices as requested 00:04:11.517 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.517 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:11.774 18:17:27 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:11.774 18:17:27 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:11.774 18:17:27 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:11.774 18:17:27 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:04:11.774 18:17:27 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:11.774 18:17:27 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:11.774 18:17:27 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:11.774 18:17:27 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:04:11.774 18:17:27 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:04:11.774 18:17:27 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:04:11.774 18:17:27 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:04:11.774 18:17:27 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:11.774 18:17:27 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:11.774 18:17:27 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:04:11.774 18:17:27 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:11.774 18:17:27 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:11.774 18:17:27 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 
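The namespace-revert step above decides whether namespace management is usable by parsing nvme id-ctrl output with the grep/cut pipeline in the trace. A small sketch of the same check, assuming nvme-cli is installed and a controller node such as /dev/nvme1 exists:

    ctrlr=/dev/nvme1                                               # assumption: controller to inspect
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)        # e.g. " 0x12a"
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)  # unallocated NVM capacity
    if (( (oacs & 0x8) != 0 )); then                               # OACS bit 3 = namespace management
        echo "namespace management supported, unvmcap=$unvmcap"
    else
        echo "controller has no namespace management; nothing to revert"
    fi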
00:04:11.774 18:17:27 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:11.774 18:17:27 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:11.774 18:17:27 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:11.774 18:17:27 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:11.774 18:17:27 -- common/autotest_common.sh@1553 -- # continue 00:04:11.774 18:17:27 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:04:11.774 18:17:27 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:11.774 18:17:27 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:11.774 18:17:27 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:04:11.774 18:17:27 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:11.774 18:17:27 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:11.774 18:17:27 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:11.774 18:17:27 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:04:11.774 18:17:27 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:04:11.774 18:17:27 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:04:11.774 18:17:27 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:04:11.774 18:17:27 -- common/autotest_common.sh@1541 -- # grep oacs 00:04:11.774 18:17:27 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:04:11.774 18:17:27 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:04:11.774 18:17:27 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:04:11.774 18:17:27 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:04:11.774 18:17:27 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:04:11.774 18:17:27 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:04:11.774 18:17:27 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:04:11.774 18:17:27 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:04:11.774 18:17:27 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:04:11.774 18:17:27 -- common/autotest_common.sh@1553 -- # continue 00:04:11.774 18:17:27 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:11.774 18:17:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:11.774 18:17:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.774 18:17:27 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:11.774 18:17:27 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:11.774 18:17:27 -- common/autotest_common.sh@10 -- # set +x 00:04:11.774 18:17:27 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:12.340 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.598 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.598 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:12.598 18:17:28 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:12.598 18:17:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:12.598 18:17:28 -- common/autotest_common.sh@10 -- # set +x 00:04:12.598 18:17:28 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:12.598 18:17:28 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:04:12.598 18:17:28 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:04:12.598 18:17:28 -- common/autotest_common.sh@1573 -- 
# bdfs=() 00:04:12.598 18:17:28 -- common/autotest_common.sh@1573 -- # local bdfs 00:04:12.598 18:17:28 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:04:12.598 18:17:28 -- common/autotest_common.sh@1509 -- # bdfs=() 00:04:12.598 18:17:28 -- common/autotest_common.sh@1509 -- # local bdfs 00:04:12.598 18:17:28 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:12.598 18:17:28 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:12.598 18:17:28 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:04:12.856 18:17:28 -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:04:12.856 18:17:28 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:12.856 18:17:28 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:12.856 18:17:28 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:12.856 18:17:28 -- common/autotest_common.sh@1576 -- # device=0x0010 00:04:12.856 18:17:28 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:12.856 18:17:28 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:04:12.856 18:17:28 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:12.856 18:17:28 -- common/autotest_common.sh@1576 -- # device=0x0010 00:04:12.856 18:17:28 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:12.856 18:17:28 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:04:12.856 18:17:28 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:04:12.856 18:17:28 -- common/autotest_common.sh@1589 -- # return 0 00:04:12.856 18:17:28 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:12.856 18:17:28 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:12.856 18:17:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:12.856 18:17:28 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:12.856 18:17:28 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:12.856 18:17:28 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:12.856 18:17:28 -- common/autotest_common.sh@10 -- # set +x 00:04:12.856 18:17:28 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:12.856 18:17:28 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:12.856 18:17:28 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:12.856 18:17:28 -- common/autotest_common.sh@10 -- # set +x 00:04:12.856 ************************************ 00:04:12.856 START TEST env 00:04:12.856 ************************************ 00:04:12.856 18:17:28 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:12.856 * Looking for test storage... 
00:04:12.856 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:12.856 18:17:28 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:12.856 18:17:28 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:12.856 18:17:28 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:12.856 18:17:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:12.856 ************************************ 00:04:12.856 START TEST env_memory 00:04:12.856 ************************************ 00:04:12.856 18:17:28 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:12.856 00:04:12.856 00:04:12.856 CUnit - A unit testing framework for C - Version 2.1-3 00:04:12.856 http://cunit.sourceforge.net/ 00:04:12.856 00:04:12.856 00:04:12.856 Suite: memory 00:04:12.856 Test: alloc and free memory map ...[2024-05-13 18:17:28.715033] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:12.856 passed 00:04:12.856 Test: mem map translation ...[2024-05-13 18:17:28.746352] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:12.856 [2024-05-13 18:17:28.746540] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:12.856 [2024-05-13 18:17:28.746807] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:12.856 [2024-05-13 18:17:28.746963] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:12.856 passed 00:04:13.114 Test: mem map registration ...[2024-05-13 18:17:28.811057] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:13.114 [2024-05-13 18:17:28.811208] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:13.114 passed 00:04:13.114 Test: mem map adjacent registrations ...passed 00:04:13.114 00:04:13.114 Run Summary: Type Total Ran Passed Failed Inactive 00:04:13.114 suites 1 1 n/a 0 0 00:04:13.114 tests 4 4 4 0 0 00:04:13.114 asserts 152 152 152 0 n/a 00:04:13.114 00:04:13.114 Elapsed time = 0.213 seconds 00:04:13.114 00:04:13.114 real 0m0.229s 00:04:13.114 user 0m0.213s 00:04:13.114 sys 0m0.013s 00:04:13.114 18:17:28 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:13.114 18:17:28 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:13.114 ************************************ 00:04:13.114 END TEST env_memory 00:04:13.114 ************************************ 00:04:13.114 18:17:28 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:13.114 18:17:28 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:13.114 18:17:28 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:13.114 18:17:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:13.114 ************************************ 00:04:13.114 START TEST env_vtophys 00:04:13.114 ************************************ 00:04:13.114 18:17:28 
env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:13.114 EAL: lib.eal log level changed from notice to debug 00:04:13.114 EAL: Detected lcore 0 as core 0 on socket 0 00:04:13.114 EAL: Detected lcore 1 as core 0 on socket 0 00:04:13.114 EAL: Detected lcore 2 as core 0 on socket 0 00:04:13.114 EAL: Detected lcore 3 as core 0 on socket 0 00:04:13.114 EAL: Detected lcore 4 as core 0 on socket 0 00:04:13.114 EAL: Detected lcore 5 as core 0 on socket 0 00:04:13.114 EAL: Detected lcore 6 as core 0 on socket 0 00:04:13.114 EAL: Detected lcore 7 as core 0 on socket 0 00:04:13.114 EAL: Detected lcore 8 as core 0 on socket 0 00:04:13.114 EAL: Detected lcore 9 as core 0 on socket 0 00:04:13.114 EAL: Maximum logical cores by configuration: 128 00:04:13.114 EAL: Detected CPU lcores: 10 00:04:13.114 EAL: Detected NUMA nodes: 1 00:04:13.114 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:13.114 EAL: Detected shared linkage of DPDK 00:04:13.114 EAL: No shared files mode enabled, IPC will be disabled 00:04:13.114 EAL: Selected IOVA mode 'PA' 00:04:13.114 EAL: Probing VFIO support... 00:04:13.114 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:13.114 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:13.114 EAL: Ask a virtual area of 0x2e000 bytes 00:04:13.114 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:13.114 EAL: Setting up physically contiguous memory... 00:04:13.114 EAL: Setting maximum number of open files to 524288 00:04:13.114 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:13.114 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:13.114 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.114 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:13.114 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.114 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.114 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:13.114 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:13.114 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.114 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:13.114 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.114 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.114 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:13.114 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:13.114 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.114 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:13.114 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.114 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.114 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:13.114 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:13.114 EAL: Ask a virtual area of 0x61000 bytes 00:04:13.114 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:13.114 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:13.114 EAL: Ask a virtual area of 0x400000000 bytes 00:04:13.114 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:13.114 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:13.114 EAL: Hugepages will be freed exactly as allocated. 
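Most of the EAL banner above is the library probing the host: it looks for the vfio modules, falls back to IOVA mode PA, and reserves virtual areas for the 2 MB hugepage memseg lists. The same facts can be checked from the shell before running the test; this is a host-inspection sketch only, nothing SPDK-specific.

    grep -i huge /proc/meminfo                                   # hugepage totals the EAL will draw from
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages   # the 2048 kB pool shown in setup.sh status
    [[ -d /sys/module/vfio ]]     && echo "vfio loaded"     || echo "vfio missing (EAL skips VFIO support)"
    [[ -d /sys/module/vfio_pci ]] && echo "vfio-pci loaded" || echo "vfio-pci missing"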
00:04:13.114 EAL: No shared files mode enabled, IPC is disabled 00:04:13.114 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: TSC frequency is ~2200000 KHz 00:04:13.373 EAL: Main lcore 0 is ready (tid=7f5ce5482a00;cpuset=[0]) 00:04:13.373 EAL: Trying to obtain current memory policy. 00:04:13.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.373 EAL: Restoring previous memory policy: 0 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was expanded by 2MB 00:04:13.373 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:13.373 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:13.373 EAL: Mem event callback 'spdk:(nil)' registered 00:04:13.373 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:13.373 00:04:13.373 00:04:13.373 CUnit - A unit testing framework for C - Version 2.1-3 00:04:13.373 http://cunit.sourceforge.net/ 00:04:13.373 00:04:13.373 00:04:13.373 Suite: components_suite 00:04:13.373 Test: vtophys_malloc_test ...passed 00:04:13.373 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:13.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.373 EAL: Restoring previous memory policy: 4 00:04:13.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was expanded by 4MB 00:04:13.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was shrunk by 4MB 00:04:13.373 EAL: Trying to obtain current memory policy. 00:04:13.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.373 EAL: Restoring previous memory policy: 4 00:04:13.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was expanded by 6MB 00:04:13.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was shrunk by 6MB 00:04:13.373 EAL: Trying to obtain current memory policy. 00:04:13.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.373 EAL: Restoring previous memory policy: 4 00:04:13.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was expanded by 10MB 00:04:13.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was shrunk by 10MB 00:04:13.373 EAL: Trying to obtain current memory policy. 
00:04:13.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.373 EAL: Restoring previous memory policy: 4 00:04:13.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was expanded by 18MB 00:04:13.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was shrunk by 18MB 00:04:13.373 EAL: Trying to obtain current memory policy. 00:04:13.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.373 EAL: Restoring previous memory policy: 4 00:04:13.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was expanded by 34MB 00:04:13.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was shrunk by 34MB 00:04:13.373 EAL: Trying to obtain current memory policy. 00:04:13.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.373 EAL: Restoring previous memory policy: 4 00:04:13.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was expanded by 66MB 00:04:13.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was shrunk by 66MB 00:04:13.373 EAL: Trying to obtain current memory policy. 00:04:13.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.373 EAL: Restoring previous memory policy: 4 00:04:13.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was expanded by 130MB 00:04:13.373 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.373 EAL: request: mp_malloc_sync 00:04:13.373 EAL: No shared files mode enabled, IPC is disabled 00:04:13.373 EAL: Heap on socket 0 was shrunk by 130MB 00:04:13.373 EAL: Trying to obtain current memory policy. 00:04:13.373 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.631 EAL: Restoring previous memory policy: 4 00:04:13.631 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.631 EAL: request: mp_malloc_sync 00:04:13.631 EAL: No shared files mode enabled, IPC is disabled 00:04:13.631 EAL: Heap on socket 0 was expanded by 258MB 00:04:13.631 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.631 EAL: request: mp_malloc_sync 00:04:13.631 EAL: No shared files mode enabled, IPC is disabled 00:04:13.631 EAL: Heap on socket 0 was shrunk by 258MB 00:04:13.631 EAL: Trying to obtain current memory policy. 
00:04:13.631 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:13.631 EAL: Restoring previous memory policy: 4 00:04:13.631 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.631 EAL: request: mp_malloc_sync 00:04:13.631 EAL: No shared files mode enabled, IPC is disabled 00:04:13.631 EAL: Heap on socket 0 was expanded by 514MB 00:04:13.889 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.889 EAL: request: mp_malloc_sync 00:04:13.889 EAL: No shared files mode enabled, IPC is disabled 00:04:13.889 EAL: Heap on socket 0 was shrunk by 514MB 00:04:13.889 EAL: Trying to obtain current memory policy. 00:04:13.889 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:14.147 EAL: Restoring previous memory policy: 4 00:04:14.147 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.147 EAL: request: mp_malloc_sync 00:04:14.147 EAL: No shared files mode enabled, IPC is disabled 00:04:14.147 EAL: Heap on socket 0 was expanded by 1026MB 00:04:14.404 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.662 passed 00:04:14.662 00:04:14.662 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.662 suites 1 1 n/a 0 0 00:04:14.662 tests 2 2 2 0 0 00:04:14.662 asserts 5379 5379 5379 0 n/a 00:04:14.662 00:04:14.662 Elapsed time = 1.316 seconds 00:04:14.662 EAL: request: mp_malloc_sync 00:04:14.662 EAL: No shared files mode enabled, IPC is disabled 00:04:14.662 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:14.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.662 EAL: request: mp_malloc_sync 00:04:14.662 EAL: No shared files mode enabled, IPC is disabled 00:04:14.662 EAL: Heap on socket 0 was shrunk by 2MB 00:04:14.662 EAL: No shared files mode enabled, IPC is disabled 00:04:14.662 EAL: No shared files mode enabled, IPC is disabled 00:04:14.662 EAL: No shared files mode enabled, IPC is disabled 00:04:14.662 ************************************ 00:04:14.662 END TEST env_vtophys 00:04:14.662 ************************************ 00:04:14.662 00:04:14.662 real 0m1.515s 00:04:14.662 user 0m0.822s 00:04:14.662 sys 0m0.551s 00:04:14.662 18:17:30 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:14.662 18:17:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:14.662 18:17:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:14.662 18:17:30 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:14.662 18:17:30 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:14.662 18:17:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.662 ************************************ 00:04:14.662 START TEST env_pci 00:04:14.662 ************************************ 00:04:14.662 18:17:30 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:14.662 00:04:14.662 00:04:14.662 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.662 http://cunit.sourceforge.net/ 00:04:14.662 00:04:14.662 00:04:14.662 Suite: pci 00:04:14.662 Test: pci_hook ...[2024-05-13 18:17:30.521485] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 60403 has claimed it 00:04:14.662 passed 00:04:14.662 00:04:14.662 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.662 suites 1 1 n/a 0 0 00:04:14.662 tests 1 1 1 0 0 00:04:14.662 asserts 25 25 25 0 n/a 00:04:14.662 00:04:14.662 Elapsed time = 0.002 seconds 00:04:14.662 EAL: Cannot find 
device (10000:00:01.0) 00:04:14.662 EAL: Failed to attach device on primary process 00:04:14.662 ************************************ 00:04:14.662 END TEST env_pci 00:04:14.662 ************************************ 00:04:14.662 00:04:14.662 real 0m0.019s 00:04:14.662 user 0m0.010s 00:04:14.662 sys 0m0.008s 00:04:14.662 18:17:30 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:14.662 18:17:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:14.662 18:17:30 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:14.662 18:17:30 env -- env/env.sh@15 -- # uname 00:04:14.662 18:17:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:14.662 18:17:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:14.662 18:17:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:14.662 18:17:30 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:04:14.662 18:17:30 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:14.662 18:17:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.662 ************************************ 00:04:14.662 START TEST env_dpdk_post_init 00:04:14.662 ************************************ 00:04:14.662 18:17:30 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:14.920 EAL: Detected CPU lcores: 10 00:04:14.920 EAL: Detected NUMA nodes: 1 00:04:14.920 EAL: Detected shared linkage of DPDK 00:04:14.920 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.920 EAL: Selected IOVA mode 'PA' 00:04:14.920 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.920 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:14.920 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:14.920 Starting DPDK initialization... 00:04:14.920 Starting SPDK post initialization... 00:04:14.920 SPDK NVMe probe 00:04:14.920 Attaching to 0000:00:10.0 00:04:14.920 Attaching to 0000:00:11.0 00:04:14.920 Attached to 0000:00:10.0 00:04:14.920 Attached to 0000:00:11.0 00:04:14.920 Cleaning up... 
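env_dpdk_post_init above simply brings up the environment on one core and probes the two emulated NVMe controllers through the spdk_nvme driver. Re-running it by hand is short once the devices are bound; the sketch below assumes an already-built SPDK tree at the path the log uses and root privileges.

    SPDK=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK/scripts/setup.sh"                      # bind the NVMe controllers away from the kernel driver
    sudo "$SPDK/test/env/env_dpdk_post_init/env_dpdk_post_init" \
         -c 0x1 --base-virtaddr=0x200000000000         # same core mask and base virtaddr as the trace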
00:04:14.920 00:04:14.920 real 0m0.178s 00:04:14.920 user 0m0.043s 00:04:14.920 sys 0m0.035s 00:04:14.920 18:17:30 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:14.920 ************************************ 00:04:14.920 END TEST env_dpdk_post_init 00:04:14.920 ************************************ 00:04:14.920 18:17:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.920 18:17:30 env -- env/env.sh@26 -- # uname 00:04:14.920 18:17:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:14.920 18:17:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.920 18:17:30 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:14.920 18:17:30 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:14.920 18:17:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.920 ************************************ 00:04:14.920 START TEST env_mem_callbacks 00:04:14.920 ************************************ 00:04:14.920 18:17:30 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.920 EAL: Detected CPU lcores: 10 00:04:14.920 EAL: Detected NUMA nodes: 1 00:04:14.920 EAL: Detected shared linkage of DPDK 00:04:14.920 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.920 EAL: Selected IOVA mode 'PA' 00:04:15.179 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:15.179 00:04:15.179 00:04:15.179 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.179 http://cunit.sourceforge.net/ 00:04:15.179 00:04:15.179 00:04:15.179 Suite: memory 00:04:15.179 Test: test ... 00:04:15.179 register 0x200000200000 2097152 00:04:15.179 malloc 3145728 00:04:15.179 register 0x200000400000 4194304 00:04:15.179 buf 0x200000500000 len 3145728 PASSED 00:04:15.179 malloc 64 00:04:15.179 buf 0x2000004fff40 len 64 PASSED 00:04:15.179 malloc 4194304 00:04:15.179 register 0x200000800000 6291456 00:04:15.179 buf 0x200000a00000 len 4194304 PASSED 00:04:15.179 free 0x200000500000 3145728 00:04:15.179 free 0x2000004fff40 64 00:04:15.179 unregister 0x200000400000 4194304 PASSED 00:04:15.179 free 0x200000a00000 4194304 00:04:15.179 unregister 0x200000800000 6291456 PASSED 00:04:15.179 malloc 8388608 00:04:15.179 register 0x200000400000 10485760 00:04:15.179 buf 0x200000600000 len 8388608 PASSED 00:04:15.179 free 0x200000600000 8388608 00:04:15.179 unregister 0x200000400000 10485760 PASSED 00:04:15.179 passed 00:04:15.179 00:04:15.179 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.179 suites 1 1 n/a 0 0 00:04:15.179 tests 1 1 1 0 0 00:04:15.179 asserts 15 15 15 0 n/a 00:04:15.179 00:04:15.179 Elapsed time = 0.008 seconds 00:04:15.179 00:04:15.179 real 0m0.143s 00:04:15.179 user 0m0.016s 00:04:15.179 sys 0m0.026s 00:04:15.179 18:17:30 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:15.179 18:17:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:15.179 ************************************ 00:04:15.179 END TEST env_mem_callbacks 00:04:15.179 ************************************ 00:04:15.179 00:04:15.179 real 0m2.415s 00:04:15.179 user 0m1.215s 00:04:15.179 sys 0m0.836s 00:04:15.179 18:17:30 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:15.179 18:17:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:15.179 ************************************ 00:04:15.179 END TEST env 00:04:15.179 
************************************ 00:04:15.179 18:17:31 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.179 18:17:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:15.179 18:17:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:15.179 18:17:31 -- common/autotest_common.sh@10 -- # set +x 00:04:15.179 ************************************ 00:04:15.179 START TEST rpc 00:04:15.179 ************************************ 00:04:15.179 18:17:31 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.179 * Looking for test storage... 00:04:15.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.437 18:17:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=60513 00:04:15.437 18:17:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.437 18:17:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 60513 00:04:15.437 18:17:31 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:15.437 18:17:31 rpc -- common/autotest_common.sh@827 -- # '[' -z 60513 ']' 00:04:15.437 18:17:31 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.437 18:17:31 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:15.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.437 18:17:31 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.437 18:17:31 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:15.437 18:17:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.437 [2024-05-13 18:17:31.184217] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:04:15.437 [2024-05-13 18:17:31.184312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60513 ] 00:04:15.437 [2024-05-13 18:17:31.317828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.714 [2024-05-13 18:17:31.440613] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:15.714 [2024-05-13 18:17:31.440662] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60513' to capture a snapshot of events at runtime. 00:04:15.714 [2024-05-13 18:17:31.440674] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:15.714 [2024-05-13 18:17:31.440692] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:15.714 [2024-05-13 18:17:31.440699] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60513 for offline analysis/debug. 
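Editor's sketch — the startup notices above also document the tracing hook for this run: spdk_tgt is launched with the bdev tracepoint group enabled (-e bdev), so events land in the shared-memory file /dev/shm/spdk_tgt_trace.pid60513 and can be captured with the spdk_trace tool exactly as the log suggests. A minimal sketch of both options, assuming the spdk_trace binary lives under build/bin and reusing the pid and path printed in this run:

# Snapshot events from the still-running target (pid taken from the notice above).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s spdk_tgt -p 60513

# Or analyze the shared-memory trace file offline after the target exits.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -f /dev/shm/spdk_tgt_trace.pid60513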
00:04:15.714 [2024-05-13 18:17:31.440725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.280 18:17:32 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:16.281 18:17:32 rpc -- common/autotest_common.sh@860 -- # return 0 00:04:16.281 18:17:32 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.281 18:17:32 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.281 18:17:32 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:16.281 18:17:32 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:16.281 18:17:32 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:16.281 18:17:32 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:16.281 18:17:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.281 ************************************ 00:04:16.281 START TEST rpc_integrity 00:04:16.281 ************************************ 00:04:16.281 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:16.281 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.281 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.281 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.281 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.281 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.281 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.539 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.539 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.539 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:16.539 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.539 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.539 { 00:04:16.539 "aliases": [ 00:04:16.539 "dff3cad6-9b59-42b8-8fd9-e14d165a3df8" 00:04:16.539 ], 00:04:16.539 "assigned_rate_limits": { 00:04:16.539 "r_mbytes_per_sec": 0, 00:04:16.539 "rw_ios_per_sec": 0, 00:04:16.539 "rw_mbytes_per_sec": 0, 00:04:16.539 "w_mbytes_per_sec": 0 00:04:16.539 }, 00:04:16.539 "block_size": 512, 00:04:16.539 "claimed": false, 00:04:16.539 "driver_specific": {}, 00:04:16.539 "memory_domains": [ 00:04:16.539 { 00:04:16.539 "dma_device_id": "system", 00:04:16.539 "dma_device_type": 1 00:04:16.539 }, 00:04:16.539 { 00:04:16.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.539 "dma_device_type": 2 00:04:16.539 } 00:04:16.539 ], 00:04:16.539 "name": "Malloc0", 
00:04:16.539 "num_blocks": 16384, 00:04:16.539 "product_name": "Malloc disk", 00:04:16.539 "supported_io_types": { 00:04:16.539 "abort": true, 00:04:16.539 "compare": false, 00:04:16.539 "compare_and_write": false, 00:04:16.539 "flush": true, 00:04:16.539 "nvme_admin": false, 00:04:16.539 "nvme_io": false, 00:04:16.539 "read": true, 00:04:16.539 "reset": true, 00:04:16.539 "unmap": true, 00:04:16.539 "write": true, 00:04:16.539 "write_zeroes": true 00:04:16.539 }, 00:04:16.539 "uuid": "dff3cad6-9b59-42b8-8fd9-e14d165a3df8", 00:04:16.539 "zoned": false 00:04:16.539 } 00:04:16.539 ]' 00:04:16.539 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.539 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.539 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.539 [2024-05-13 18:17:32.348379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:16.539 [2024-05-13 18:17:32.348448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.539 [2024-05-13 18:17:32.348470] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14b9400 00:04:16.539 [2024-05-13 18:17:32.348481] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.539 [2024-05-13 18:17:32.350250] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.539 [2024-05-13 18:17:32.350300] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.539 Passthru0 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.539 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.539 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.539 { 00:04:16.539 "aliases": [ 00:04:16.539 "dff3cad6-9b59-42b8-8fd9-e14d165a3df8" 00:04:16.539 ], 00:04:16.539 "assigned_rate_limits": { 00:04:16.539 "r_mbytes_per_sec": 0, 00:04:16.539 "rw_ios_per_sec": 0, 00:04:16.539 "rw_mbytes_per_sec": 0, 00:04:16.539 "w_mbytes_per_sec": 0 00:04:16.539 }, 00:04:16.539 "block_size": 512, 00:04:16.539 "claim_type": "exclusive_write", 00:04:16.539 "claimed": true, 00:04:16.539 "driver_specific": {}, 00:04:16.539 "memory_domains": [ 00:04:16.539 { 00:04:16.539 "dma_device_id": "system", 00:04:16.539 "dma_device_type": 1 00:04:16.539 }, 00:04:16.539 { 00:04:16.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.539 "dma_device_type": 2 00:04:16.539 } 00:04:16.539 ], 00:04:16.539 "name": "Malloc0", 00:04:16.539 "num_blocks": 16384, 00:04:16.539 "product_name": "Malloc disk", 00:04:16.539 "supported_io_types": { 00:04:16.539 "abort": true, 00:04:16.539 "compare": false, 00:04:16.539 "compare_and_write": false, 00:04:16.539 "flush": true, 00:04:16.539 "nvme_admin": false, 00:04:16.539 "nvme_io": false, 00:04:16.539 "read": true, 00:04:16.539 "reset": true, 00:04:16.539 "unmap": true, 00:04:16.539 "write": true, 00:04:16.539 "write_zeroes": true 00:04:16.539 }, 00:04:16.539 "uuid": 
"dff3cad6-9b59-42b8-8fd9-e14d165a3df8", 00:04:16.539 "zoned": false 00:04:16.539 }, 00:04:16.539 { 00:04:16.539 "aliases": [ 00:04:16.539 "f92930ce-6928-51a3-8461-d989507f2be8" 00:04:16.539 ], 00:04:16.539 "assigned_rate_limits": { 00:04:16.539 "r_mbytes_per_sec": 0, 00:04:16.539 "rw_ios_per_sec": 0, 00:04:16.539 "rw_mbytes_per_sec": 0, 00:04:16.539 "w_mbytes_per_sec": 0 00:04:16.539 }, 00:04:16.539 "block_size": 512, 00:04:16.539 "claimed": false, 00:04:16.539 "driver_specific": { 00:04:16.539 "passthru": { 00:04:16.539 "base_bdev_name": "Malloc0", 00:04:16.539 "name": "Passthru0" 00:04:16.539 } 00:04:16.539 }, 00:04:16.539 "memory_domains": [ 00:04:16.539 { 00:04:16.539 "dma_device_id": "system", 00:04:16.539 "dma_device_type": 1 00:04:16.539 }, 00:04:16.539 { 00:04:16.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.539 "dma_device_type": 2 00:04:16.539 } 00:04:16.539 ], 00:04:16.539 "name": "Passthru0", 00:04:16.539 "num_blocks": 16384, 00:04:16.539 "product_name": "passthru", 00:04:16.539 "supported_io_types": { 00:04:16.539 "abort": true, 00:04:16.539 "compare": false, 00:04:16.539 "compare_and_write": false, 00:04:16.539 "flush": true, 00:04:16.539 "nvme_admin": false, 00:04:16.539 "nvme_io": false, 00:04:16.539 "read": true, 00:04:16.539 "reset": true, 00:04:16.539 "unmap": true, 00:04:16.539 "write": true, 00:04:16.539 "write_zeroes": true 00:04:16.539 }, 00:04:16.539 "uuid": "f92930ce-6928-51a3-8461-d989507f2be8", 00:04:16.539 "zoned": false 00:04:16.539 } 00:04:16.539 ]' 00:04:16.539 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.539 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.539 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.539 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.539 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.540 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.540 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.540 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.540 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.540 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.540 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.540 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.798 18:17:32 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.798 00:04:16.798 real 0m0.314s 00:04:16.798 user 0m0.206s 00:04:16.798 sys 0m0.038s 00:04:16.798 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:16.798 18:17:32 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.798 ************************************ 00:04:16.798 END TEST rpc_integrity 00:04:16.798 ************************************ 00:04:16.798 18:17:32 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:16.798 18:17:32 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:16.798 
18:17:32 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:16.798 18:17:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.798 ************************************ 00:04:16.798 START TEST rpc_plugins 00:04:16.798 ************************************ 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:04:16.798 18:17:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.798 18:17:32 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:16.798 18:17:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.798 18:17:32 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:16.798 { 00:04:16.798 "aliases": [ 00:04:16.798 "e7a9ef53-52a5-4a58-a5f9-ac88c4eb3a33" 00:04:16.798 ], 00:04:16.798 "assigned_rate_limits": { 00:04:16.798 "r_mbytes_per_sec": 0, 00:04:16.798 "rw_ios_per_sec": 0, 00:04:16.798 "rw_mbytes_per_sec": 0, 00:04:16.798 "w_mbytes_per_sec": 0 00:04:16.798 }, 00:04:16.798 "block_size": 4096, 00:04:16.798 "claimed": false, 00:04:16.798 "driver_specific": {}, 00:04:16.798 "memory_domains": [ 00:04:16.798 { 00:04:16.798 "dma_device_id": "system", 00:04:16.798 "dma_device_type": 1 00:04:16.798 }, 00:04:16.798 { 00:04:16.798 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.798 "dma_device_type": 2 00:04:16.798 } 00:04:16.798 ], 00:04:16.798 "name": "Malloc1", 00:04:16.798 "num_blocks": 256, 00:04:16.798 "product_name": "Malloc disk", 00:04:16.798 "supported_io_types": { 00:04:16.798 "abort": true, 00:04:16.798 "compare": false, 00:04:16.798 "compare_and_write": false, 00:04:16.798 "flush": true, 00:04:16.798 "nvme_admin": false, 00:04:16.798 "nvme_io": false, 00:04:16.798 "read": true, 00:04:16.798 "reset": true, 00:04:16.798 "unmap": true, 00:04:16.798 "write": true, 00:04:16.798 "write_zeroes": true 00:04:16.798 }, 00:04:16.798 "uuid": "e7a9ef53-52a5-4a58-a5f9-ac88c4eb3a33", 00:04:16.798 "zoned": false 00:04:16.798 } 00:04:16.798 ]' 00:04:16.798 18:17:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:16.798 18:17:32 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:16.798 18:17:32 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.798 18:17:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.798 18:17:32 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:16.798 18:17:32 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:16.798 18:17:32 rpc.rpc_plugins 
-- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:16.798 00:04:16.798 real 0m0.178s 00:04:16.798 user 0m0.122s 00:04:16.798 sys 0m0.016s 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:16.798 ************************************ 00:04:16.798 END TEST rpc_plugins 00:04:16.798 18:17:32 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.798 ************************************ 00:04:17.056 18:17:32 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:17.056 18:17:32 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:17.056 18:17:32 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:17.056 18:17:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.056 ************************************ 00:04:17.056 START TEST rpc_trace_cmd_test 00:04:17.056 ************************************ 00:04:17.056 18:17:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:04:17.056 18:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:17.056 18:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:17.056 18:17:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.056 18:17:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:17.056 18:17:32 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.056 18:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:17.056 "bdev": { 00:04:17.056 "mask": "0x8", 00:04:17.056 "tpoint_mask": "0xffffffffffffffff" 00:04:17.056 }, 00:04:17.056 "bdev_nvme": { 00:04:17.056 "mask": "0x4000", 00:04:17.056 "tpoint_mask": "0x0" 00:04:17.056 }, 00:04:17.056 "blobfs": { 00:04:17.056 "mask": "0x80", 00:04:17.056 "tpoint_mask": "0x0" 00:04:17.056 }, 00:04:17.056 "dsa": { 00:04:17.056 "mask": "0x200", 00:04:17.056 "tpoint_mask": "0x0" 00:04:17.056 }, 00:04:17.056 "ftl": { 00:04:17.056 "mask": "0x40", 00:04:17.056 "tpoint_mask": "0x0" 00:04:17.056 }, 00:04:17.056 "iaa": { 00:04:17.056 "mask": "0x1000", 00:04:17.056 "tpoint_mask": "0x0" 00:04:17.056 }, 00:04:17.056 "iscsi_conn": { 00:04:17.056 "mask": "0x2", 00:04:17.056 "tpoint_mask": "0x0" 00:04:17.056 }, 00:04:17.056 "nvme_pcie": { 00:04:17.056 "mask": "0x800", 00:04:17.056 "tpoint_mask": "0x0" 00:04:17.056 }, 00:04:17.056 "nvme_tcp": { 00:04:17.056 "mask": "0x2000", 00:04:17.056 "tpoint_mask": "0x0" 00:04:17.056 }, 00:04:17.056 "nvmf_rdma": { 00:04:17.056 "mask": "0x10", 00:04:17.056 "tpoint_mask": "0x0" 00:04:17.056 }, 00:04:17.056 "nvmf_tcp": { 00:04:17.056 "mask": "0x20", 00:04:17.056 "tpoint_mask": "0x0" 00:04:17.056 }, 00:04:17.056 "scsi": { 00:04:17.056 "mask": "0x4", 00:04:17.056 "tpoint_mask": "0x0" 00:04:17.056 }, 00:04:17.056 "sock": { 00:04:17.056 "mask": "0x8000", 00:04:17.056 "tpoint_mask": "0x0" 00:04:17.056 }, 00:04:17.056 "thread": { 00:04:17.056 "mask": "0x400", 00:04:17.056 "tpoint_mask": "0x0" 00:04:17.056 }, 00:04:17.056 "tpoint_group_mask": "0x8", 00:04:17.056 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60513" 00:04:17.056 }' 00:04:17.056 18:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:17.056 18:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:17.056 18:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:17.056 18:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:17.056 18:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # 
jq 'has("tpoint_shm_path")' 00:04:17.056 18:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:17.056 18:17:32 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:17.314 18:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:17.314 18:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:17.314 18:17:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:17.314 00:04:17.314 real 0m0.287s 00:04:17.314 user 0m0.252s 00:04:17.314 sys 0m0.025s 00:04:17.314 18:17:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:17.314 18:17:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:17.314 ************************************ 00:04:17.314 END TEST rpc_trace_cmd_test 00:04:17.314 ************************************ 00:04:17.314 18:17:33 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:04:17.314 18:17:33 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:04:17.314 18:17:33 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:17.314 18:17:33 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:17.314 18:17:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.314 ************************************ 00:04:17.314 START TEST go_rpc 00:04:17.314 ************************************ 00:04:17.314 18:17:33 rpc.go_rpc -- common/autotest_common.sh@1121 -- # go_rpc 00:04:17.314 18:17:33 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:17.314 18:17:33 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:04:17.314 18:17:33 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:04:17.314 18:17:33 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:04:17.314 18:17:33 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:04:17.314 18:17:33 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.314 18:17:33 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.314 18:17:33 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.314 18:17:33 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:04:17.314 18:17:33 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:17.314 18:17:33 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["9cf5644e-8824-4102-937c-f29af9d91bf2"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"9cf5644e-8824-4102-937c-f29af9d91bf2","zoned":false}]' 00:04:17.314 18:17:33 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:04:17.573 18:17:33 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:04:17.573 18:17:33 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:17.573 18:17:33 rpc.go_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.573 18:17:33 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.573 18:17:33 rpc.go_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.573 18:17:33 rpc.go_rpc -- 
rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:04:17.573 18:17:33 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:04:17.573 18:17:33 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:04:17.573 18:17:33 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:04:17.573 00:04:17.573 real 0m0.202s 00:04:17.573 user 0m0.147s 00:04:17.573 sys 0m0.026s 00:04:17.573 18:17:33 rpc.go_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:17.573 ************************************ 00:04:17.573 END TEST go_rpc 00:04:17.573 18:17:33 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.573 ************************************ 00:04:17.573 18:17:33 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:17.573 18:17:33 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:17.573 18:17:33 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:17.573 18:17:33 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:17.573 18:17:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.573 ************************************ 00:04:17.573 START TEST rpc_daemon_integrity 00:04:17.573 ************************************ 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:17.573 { 00:04:17.573 "aliases": [ 00:04:17.573 "58aee157-1205-4c08-bf54-0662072ebba6" 00:04:17.573 ], 00:04:17.573 "assigned_rate_limits": { 00:04:17.573 "r_mbytes_per_sec": 0, 00:04:17.573 "rw_ios_per_sec": 0, 00:04:17.573 "rw_mbytes_per_sec": 0, 00:04:17.573 "w_mbytes_per_sec": 0 00:04:17.573 }, 00:04:17.573 "block_size": 512, 00:04:17.573 "claimed": false, 00:04:17.573 "driver_specific": {}, 00:04:17.573 "memory_domains": [ 00:04:17.573 { 00:04:17.573 "dma_device_id": "system", 00:04:17.573 "dma_device_type": 1 00:04:17.573 }, 00:04:17.573 { 00:04:17.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.573 "dma_device_type": 2 00:04:17.573 } 00:04:17.573 ], 00:04:17.573 "name": 
"Malloc3", 00:04:17.573 "num_blocks": 16384, 00:04:17.573 "product_name": "Malloc disk", 00:04:17.573 "supported_io_types": { 00:04:17.573 "abort": true, 00:04:17.573 "compare": false, 00:04:17.573 "compare_and_write": false, 00:04:17.573 "flush": true, 00:04:17.573 "nvme_admin": false, 00:04:17.573 "nvme_io": false, 00:04:17.573 "read": true, 00:04:17.573 "reset": true, 00:04:17.573 "unmap": true, 00:04:17.573 "write": true, 00:04:17.573 "write_zeroes": true 00:04:17.573 }, 00:04:17.573 "uuid": "58aee157-1205-4c08-bf54-0662072ebba6", 00:04:17.573 "zoned": false 00:04:17.573 } 00:04:17.573 ]' 00:04:17.573 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.831 [2024-05-13 18:17:33.534829] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:17.831 [2024-05-13 18:17:33.534878] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:17.831 [2024-05-13 18:17:33.534906] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16510a0 00:04:17.831 [2024-05-13 18:17:33.534915] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:17.831 [2024-05-13 18:17:33.536280] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:17.831 [2024-05-13 18:17:33.536327] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:17.831 Passthru0 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:17.831 { 00:04:17.831 "aliases": [ 00:04:17.831 "58aee157-1205-4c08-bf54-0662072ebba6" 00:04:17.831 ], 00:04:17.831 "assigned_rate_limits": { 00:04:17.831 "r_mbytes_per_sec": 0, 00:04:17.831 "rw_ios_per_sec": 0, 00:04:17.831 "rw_mbytes_per_sec": 0, 00:04:17.831 "w_mbytes_per_sec": 0 00:04:17.831 }, 00:04:17.831 "block_size": 512, 00:04:17.831 "claim_type": "exclusive_write", 00:04:17.831 "claimed": true, 00:04:17.831 "driver_specific": {}, 00:04:17.831 "memory_domains": [ 00:04:17.831 { 00:04:17.831 "dma_device_id": "system", 00:04:17.831 "dma_device_type": 1 00:04:17.831 }, 00:04:17.831 { 00:04:17.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.831 "dma_device_type": 2 00:04:17.831 } 00:04:17.831 ], 00:04:17.831 "name": "Malloc3", 00:04:17.831 "num_blocks": 16384, 00:04:17.831 "product_name": "Malloc disk", 00:04:17.831 "supported_io_types": { 00:04:17.831 "abort": true, 00:04:17.831 "compare": false, 00:04:17.831 "compare_and_write": false, 00:04:17.831 "flush": true, 00:04:17.831 "nvme_admin": false, 00:04:17.831 "nvme_io": false, 00:04:17.831 "read": true, 00:04:17.831 "reset": true, 00:04:17.831 "unmap": true, 00:04:17.831 "write": true, 
00:04:17.831 "write_zeroes": true 00:04:17.831 }, 00:04:17.831 "uuid": "58aee157-1205-4c08-bf54-0662072ebba6", 00:04:17.831 "zoned": false 00:04:17.831 }, 00:04:17.831 { 00:04:17.831 "aliases": [ 00:04:17.831 "e2f1d2e1-fb1b-5c7b-9372-ea264bdae37f" 00:04:17.831 ], 00:04:17.831 "assigned_rate_limits": { 00:04:17.831 "r_mbytes_per_sec": 0, 00:04:17.831 "rw_ios_per_sec": 0, 00:04:17.831 "rw_mbytes_per_sec": 0, 00:04:17.831 "w_mbytes_per_sec": 0 00:04:17.831 }, 00:04:17.831 "block_size": 512, 00:04:17.831 "claimed": false, 00:04:17.831 "driver_specific": { 00:04:17.831 "passthru": { 00:04:17.831 "base_bdev_name": "Malloc3", 00:04:17.831 "name": "Passthru0" 00:04:17.831 } 00:04:17.831 }, 00:04:17.831 "memory_domains": [ 00:04:17.831 { 00:04:17.831 "dma_device_id": "system", 00:04:17.831 "dma_device_type": 1 00:04:17.831 }, 00:04:17.831 { 00:04:17.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.831 "dma_device_type": 2 00:04:17.831 } 00:04:17.831 ], 00:04:17.831 "name": "Passthru0", 00:04:17.831 "num_blocks": 16384, 00:04:17.831 "product_name": "passthru", 00:04:17.831 "supported_io_types": { 00:04:17.831 "abort": true, 00:04:17.831 "compare": false, 00:04:17.831 "compare_and_write": false, 00:04:17.831 "flush": true, 00:04:17.831 "nvme_admin": false, 00:04:17.831 "nvme_io": false, 00:04:17.831 "read": true, 00:04:17.831 "reset": true, 00:04:17.831 "unmap": true, 00:04:17.831 "write": true, 00:04:17.831 "write_zeroes": true 00:04:17.831 }, 00:04:17.831 "uuid": "e2f1d2e1-fb1b-5c7b-9372-ea264bdae37f", 00:04:17.831 "zoned": false 00:04:17.831 } 00:04:17.831 ]' 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.831 00:04:17.831 real 0m0.319s 00:04:17.831 user 0m0.214s 00:04:17.831 sys 0m0.038s 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:17.831 ************************************ 00:04:17.831 18:17:33 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.831 END TEST rpc_daemon_integrity 00:04:17.831 
************************************ 00:04:17.831 18:17:33 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:17.832 18:17:33 rpc -- rpc/rpc.sh@84 -- # killprocess 60513 00:04:17.832 18:17:33 rpc -- common/autotest_common.sh@946 -- # '[' -z 60513 ']' 00:04:17.832 18:17:33 rpc -- common/autotest_common.sh@950 -- # kill -0 60513 00:04:17.832 18:17:33 rpc -- common/autotest_common.sh@951 -- # uname 00:04:17.832 18:17:33 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:17.832 18:17:33 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60513 00:04:17.832 18:17:33 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:17.832 18:17:33 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:17.832 18:17:33 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60513' 00:04:17.832 killing process with pid 60513 00:04:17.832 18:17:33 rpc -- common/autotest_common.sh@965 -- # kill 60513 00:04:17.832 18:17:33 rpc -- common/autotest_common.sh@970 -- # wait 60513 00:04:18.398 00:04:18.398 real 0m3.163s 00:04:18.398 user 0m4.220s 00:04:18.398 sys 0m0.707s 00:04:18.398 18:17:34 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:18.398 18:17:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.398 ************************************ 00:04:18.398 END TEST rpc 00:04:18.398 ************************************ 00:04:18.398 18:17:34 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:18.398 18:17:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:18.398 18:17:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.398 18:17:34 -- common/autotest_common.sh@10 -- # set +x 00:04:18.398 ************************************ 00:04:18.398 START TEST skip_rpc 00:04:18.398 ************************************ 00:04:18.398 18:17:34 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:18.398 * Looking for test storage... 00:04:18.398 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:18.398 18:17:34 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:18.398 18:17:34 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:18.398 18:17:34 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:18.398 18:17:34 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:18.398 18:17:34 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:18.398 18:17:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.398 ************************************ 00:04:18.398 START TEST skip_rpc 00:04:18.398 ************************************ 00:04:18.398 18:17:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:04:18.398 18:17:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60774 00:04:18.398 18:17:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.398 18:17:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:18.398 18:17:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:18.656 [2024-05-13 18:17:34.393956] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
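Editor's sketch — the skip_rpc case whose target is starting here deliberately launches spdk_tgt with --no-rpc-server, so no Unix socket is ever created and, as the log below shows, the spdk_get_version call fails with a connect error, which is exactly what the NOT wrapper asserts. A minimal standalone sketch of the same negative check, with paths taken from this run and the sleep/kill handling simplified:

SPDK=/home/vagrant/spdk_repo/spdk

# Start the target with no RPC server listening on /var/tmp/spdk.sock.
$SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5

# The client is expected to fail: there is no socket to connect to.
if $SPDK/scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC succeeded without an RPC server" >&2
else
    echo "expected failure: no RPC socket available"
fi

kill "$tgt_pid"
wait "$tgt_pid"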
00:04:18.656 [2024-05-13 18:17:34.394055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60774 ] 00:04:18.656 [2024-05-13 18:17:34.539489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.915 [2024-05-13 18:17:34.688638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.205 2024/05/13 18:17:39 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 60774 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 60774 ']' 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 60774 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60774 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:24.205 killing process with pid 60774 00:04:24.205 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60774' 00:04:24.206 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 60774 00:04:24.206 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 60774 00:04:24.206 00:04:24.206 real 0m5.475s 00:04:24.206 user 0m5.074s 00:04:24.206 sys 0m0.297s 00:04:24.206 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- 
# xtrace_disable 00:04:24.206 18:17:39 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.206 ************************************ 00:04:24.206 END TEST skip_rpc 00:04:24.206 ************************************ 00:04:24.206 18:17:39 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:24.206 18:17:39 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:24.206 18:17:39 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:24.206 18:17:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.206 ************************************ 00:04:24.206 START TEST skip_rpc_with_json 00:04:24.206 ************************************ 00:04:24.206 18:17:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:04:24.206 18:17:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:24.206 18:17:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60872 00:04:24.206 18:17:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.206 18:17:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:24.206 18:17:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 60872 00:04:24.206 18:17:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 60872 ']' 00:04:24.206 18:17:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.206 18:17:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:24.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.206 18:17:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.206 18:17:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:24.206 18:17:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.206 [2024-05-13 18:17:39.922382] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:04:24.206 [2024-05-13 18:17:39.922657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60872 ] 00:04:24.206 [2024-05-13 18:17:40.058423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.464 [2024-05-13 18:17:40.178448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.031 18:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:25.031 18:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:04:25.031 18:17:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:25.031 18:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.031 18:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.031 [2024-05-13 18:17:40.965731] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:25.031 2024/05/13 18:17:40 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:04:25.031 request: 00:04:25.031 { 00:04:25.031 "method": "nvmf_get_transports", 00:04:25.031 "params": { 00:04:25.031 "trtype": "tcp" 00:04:25.031 } 00:04:25.031 } 00:04:25.031 Got JSON-RPC error response 00:04:25.031 GoRPCClient: error on JSON-RPC call 00:04:25.289 18:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:25.289 18:17:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:25.289 18:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.289 18:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.289 [2024-05-13 18:17:40.977838] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:25.289 18:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.289 18:17:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:25.289 18:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:25.289 18:17:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.289 18:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:25.289 18:17:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.289 { 00:04:25.289 "subsystems": [ 00:04:25.289 { 00:04:25.289 "subsystem": "vfio_user_target", 00:04:25.290 "config": null 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "subsystem": "keyring", 00:04:25.290 "config": [] 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "subsystem": "iobuf", 00:04:25.290 "config": [ 00:04:25.290 { 00:04:25.290 "method": "iobuf_set_options", 00:04:25.290 "params": { 00:04:25.290 "large_bufsize": 135168, 00:04:25.290 "large_pool_count": 1024, 00:04:25.290 "small_bufsize": 8192, 00:04:25.290 "small_pool_count": 8192 00:04:25.290 } 00:04:25.290 } 00:04:25.290 ] 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "subsystem": "sock", 00:04:25.290 "config": [ 00:04:25.290 { 00:04:25.290 "method": "sock_impl_set_options", 00:04:25.290 "params": { 00:04:25.290 "enable_ktls": false, 
00:04:25.290 "enable_placement_id": 0, 00:04:25.290 "enable_quickack": false, 00:04:25.290 "enable_recv_pipe": true, 00:04:25.290 "enable_zerocopy_send_client": false, 00:04:25.290 "enable_zerocopy_send_server": true, 00:04:25.290 "impl_name": "posix", 00:04:25.290 "recv_buf_size": 2097152, 00:04:25.290 "send_buf_size": 2097152, 00:04:25.290 "tls_version": 0, 00:04:25.290 "zerocopy_threshold": 0 00:04:25.290 } 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "method": "sock_impl_set_options", 00:04:25.290 "params": { 00:04:25.290 "enable_ktls": false, 00:04:25.290 "enable_placement_id": 0, 00:04:25.290 "enable_quickack": false, 00:04:25.290 "enable_recv_pipe": true, 00:04:25.290 "enable_zerocopy_send_client": false, 00:04:25.290 "enable_zerocopy_send_server": true, 00:04:25.290 "impl_name": "ssl", 00:04:25.290 "recv_buf_size": 4096, 00:04:25.290 "send_buf_size": 4096, 00:04:25.290 "tls_version": 0, 00:04:25.290 "zerocopy_threshold": 0 00:04:25.290 } 00:04:25.290 } 00:04:25.290 ] 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "subsystem": "vmd", 00:04:25.290 "config": [] 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "subsystem": "accel", 00:04:25.290 "config": [ 00:04:25.290 { 00:04:25.290 "method": "accel_set_options", 00:04:25.290 "params": { 00:04:25.290 "buf_count": 2048, 00:04:25.290 "large_cache_size": 16, 00:04:25.290 "sequence_count": 2048, 00:04:25.290 "small_cache_size": 128, 00:04:25.290 "task_count": 2048 00:04:25.290 } 00:04:25.290 } 00:04:25.290 ] 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "subsystem": "bdev", 00:04:25.290 "config": [ 00:04:25.290 { 00:04:25.290 "method": "bdev_set_options", 00:04:25.290 "params": { 00:04:25.290 "bdev_auto_examine": true, 00:04:25.290 "bdev_io_cache_size": 256, 00:04:25.290 "bdev_io_pool_size": 65535, 00:04:25.290 "iobuf_large_cache_size": 16, 00:04:25.290 "iobuf_small_cache_size": 128 00:04:25.290 } 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "method": "bdev_raid_set_options", 00:04:25.290 "params": { 00:04:25.290 "process_window_size_kb": 1024 00:04:25.290 } 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "method": "bdev_iscsi_set_options", 00:04:25.290 "params": { 00:04:25.290 "timeout_sec": 30 00:04:25.290 } 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "method": "bdev_nvme_set_options", 00:04:25.290 "params": { 00:04:25.290 "action_on_timeout": "none", 00:04:25.290 "allow_accel_sequence": false, 00:04:25.290 "arbitration_burst": 0, 00:04:25.290 "bdev_retry_count": 3, 00:04:25.290 "ctrlr_loss_timeout_sec": 0, 00:04:25.290 "delay_cmd_submit": true, 00:04:25.290 "dhchap_dhgroups": [ 00:04:25.290 "null", 00:04:25.290 "ffdhe2048", 00:04:25.290 "ffdhe3072", 00:04:25.290 "ffdhe4096", 00:04:25.290 "ffdhe6144", 00:04:25.290 "ffdhe8192" 00:04:25.290 ], 00:04:25.290 "dhchap_digests": [ 00:04:25.290 "sha256", 00:04:25.290 "sha384", 00:04:25.290 "sha512" 00:04:25.290 ], 00:04:25.290 "disable_auto_failback": false, 00:04:25.290 "fast_io_fail_timeout_sec": 0, 00:04:25.290 "generate_uuids": false, 00:04:25.290 "high_priority_weight": 0, 00:04:25.290 "io_path_stat": false, 00:04:25.290 "io_queue_requests": 0, 00:04:25.290 "keep_alive_timeout_ms": 10000, 00:04:25.290 "low_priority_weight": 0, 00:04:25.290 "medium_priority_weight": 0, 00:04:25.290 "nvme_adminq_poll_period_us": 10000, 00:04:25.290 "nvme_error_stat": false, 00:04:25.290 "nvme_ioq_poll_period_us": 0, 00:04:25.290 "rdma_cm_event_timeout_ms": 0, 00:04:25.290 "rdma_max_cq_size": 0, 00:04:25.290 "rdma_srq_size": 0, 00:04:25.290 "reconnect_delay_sec": 0, 00:04:25.290 "timeout_admin_us": 0, 00:04:25.290 
"timeout_us": 0, 00:04:25.290 "transport_ack_timeout": 0, 00:04:25.290 "transport_retry_count": 4, 00:04:25.290 "transport_tos": 0 00:04:25.290 } 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "method": "bdev_nvme_set_hotplug", 00:04:25.290 "params": { 00:04:25.290 "enable": false, 00:04:25.290 "period_us": 100000 00:04:25.290 } 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "method": "bdev_wait_for_examine" 00:04:25.290 } 00:04:25.290 ] 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "subsystem": "scsi", 00:04:25.290 "config": null 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "subsystem": "scheduler", 00:04:25.290 "config": [ 00:04:25.290 { 00:04:25.290 "method": "framework_set_scheduler", 00:04:25.290 "params": { 00:04:25.290 "name": "static" 00:04:25.290 } 00:04:25.290 } 00:04:25.290 ] 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "subsystem": "vhost_scsi", 00:04:25.290 "config": [] 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "subsystem": "vhost_blk", 00:04:25.290 "config": [] 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "subsystem": "ublk", 00:04:25.290 "config": [] 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "subsystem": "nbd", 00:04:25.290 "config": [] 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "subsystem": "nvmf", 00:04:25.290 "config": [ 00:04:25.290 { 00:04:25.290 "method": "nvmf_set_config", 00:04:25.290 "params": { 00:04:25.290 "admin_cmd_passthru": { 00:04:25.290 "identify_ctrlr": false 00:04:25.290 }, 00:04:25.290 "discovery_filter": "match_any" 00:04:25.290 } 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "method": "nvmf_set_max_subsystems", 00:04:25.290 "params": { 00:04:25.290 "max_subsystems": 1024 00:04:25.290 } 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "method": "nvmf_set_crdt", 00:04:25.290 "params": { 00:04:25.290 "crdt1": 0, 00:04:25.290 "crdt2": 0, 00:04:25.290 "crdt3": 0 00:04:25.290 } 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "method": "nvmf_create_transport", 00:04:25.290 "params": { 00:04:25.290 "abort_timeout_sec": 1, 00:04:25.290 "ack_timeout": 0, 00:04:25.290 "buf_cache_size": 4294967295, 00:04:25.290 "c2h_success": true, 00:04:25.290 "data_wr_pool_size": 0, 00:04:25.290 "dif_insert_or_strip": false, 00:04:25.290 "in_capsule_data_size": 4096, 00:04:25.290 "io_unit_size": 131072, 00:04:25.290 "max_aq_depth": 128, 00:04:25.290 "max_io_qpairs_per_ctrlr": 127, 00:04:25.290 "max_io_size": 131072, 00:04:25.290 "max_queue_depth": 128, 00:04:25.290 "num_shared_buffers": 511, 00:04:25.290 "sock_priority": 0, 00:04:25.290 "trtype": "TCP", 00:04:25.290 "zcopy": false 00:04:25.290 } 00:04:25.290 } 00:04:25.290 ] 00:04:25.290 }, 00:04:25.290 { 00:04:25.290 "subsystem": "iscsi", 00:04:25.290 "config": [ 00:04:25.290 { 00:04:25.290 "method": "iscsi_set_options", 00:04:25.290 "params": { 00:04:25.290 "allow_duplicated_isid": false, 00:04:25.290 "chap_group": 0, 00:04:25.290 "data_out_pool_size": 2048, 00:04:25.290 "default_time2retain": 20, 00:04:25.290 "default_time2wait": 2, 00:04:25.290 "disable_chap": false, 00:04:25.290 "error_recovery_level": 0, 00:04:25.290 "first_burst_length": 8192, 00:04:25.290 "immediate_data": true, 00:04:25.290 "immediate_data_pool_size": 16384, 00:04:25.290 "max_connections_per_session": 2, 00:04:25.290 "max_large_datain_per_connection": 64, 00:04:25.290 "max_queue_depth": 64, 00:04:25.290 "max_r2t_per_connection": 4, 00:04:25.290 "max_sessions": 128, 00:04:25.290 "mutual_chap": false, 00:04:25.290 "node_base": "iqn.2016-06.io.spdk", 00:04:25.290 "nop_in_interval": 30, 00:04:25.290 "nop_timeout": 60, 00:04:25.290 "pdu_pool_size": 36864, 00:04:25.290 
"require_chap": false 00:04:25.290 } 00:04:25.290 } 00:04:25.290 ] 00:04:25.290 } 00:04:25.290 ] 00:04:25.290 } 00:04:25.290 18:17:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:25.290 18:17:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 60872 00:04:25.290 18:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 60872 ']' 00:04:25.290 18:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 60872 00:04:25.290 18:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:25.290 18:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:25.290 18:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60872 00:04:25.290 18:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:25.290 18:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:25.290 18:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60872' 00:04:25.290 killing process with pid 60872 00:04:25.290 18:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 60872 00:04:25.290 18:17:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 60872 00:04:25.857 18:17:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60906 00:04:25.857 18:17:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.857 18:17:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:31.144 18:17:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 60906 00:04:31.144 18:17:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 60906 ']' 00:04:31.144 18:17:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 60906 00:04:31.144 18:17:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:04:31.144 18:17:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:31.144 18:17:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 60906 00:04:31.144 18:17:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:31.144 18:17:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:31.144 killing process with pid 60906 00:04:31.144 18:17:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 60906' 00:04:31.144 18:17:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 60906 00:04:31.144 18:17:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 60906 00:04:31.144 18:17:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:31.144 18:17:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:31.144 00:04:31.144 real 0m7.201s 00:04:31.144 user 0m6.936s 00:04:31.144 sys 0m0.694s 00:04:31.144 18:17:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:31.144 
************************************ 00:04:31.144 END TEST skip_rpc_with_json 00:04:31.144 ************************************ 00:04:31.144 18:17:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:31.402 18:17:47 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:31.402 18:17:47 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:31.402 18:17:47 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.402 18:17:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.402 ************************************ 00:04:31.402 START TEST skip_rpc_with_delay 00:04:31.402 ************************************ 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:31.402 [2024-05-13 18:17:47.180609] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:31.402 [2024-05-13 18:17:47.180759] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:31.402 00:04:31.402 real 0m0.090s 00:04:31.402 user 0m0.062s 00:04:31.402 sys 0m0.027s 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:31.402 18:17:47 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:31.402 ************************************ 00:04:31.402 END TEST skip_rpc_with_delay 00:04:31.402 ************************************ 00:04:31.402 18:17:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:31.402 18:17:47 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:31.402 18:17:47 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:31.402 18:17:47 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:31.402 18:17:47 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.402 18:17:47 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.402 ************************************ 00:04:31.402 START TEST exit_on_failed_rpc_init 00:04:31.402 ************************************ 00:04:31.402 18:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:04:31.402 18:17:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=61021 00:04:31.402 18:17:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 61021 00:04:31.402 18:17:47 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.402 18:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 61021 ']' 00:04:31.402 18:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.402 18:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:31.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.402 18:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.402 18:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:31.402 18:17:47 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:31.402 [2024-05-13 18:17:47.325023] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:04:31.402 [2024-05-13 18:17:47.325130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61021 ] 00:04:31.660 [2024-05-13 18:17:47.462929] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.660 [2024-05-13 18:17:47.581534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:32.593 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:32.593 [2024-05-13 18:17:48.425587] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:04:32.593 [2024-05-13 18:17:48.425685] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61051 ] 00:04:32.851 [2024-05-13 18:17:48.565173] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.851 [2024-05-13 18:17:48.690193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.851 [2024-05-13 18:17:48.690299] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:32.851 [2024-05-13 18:17:48.690316] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:32.851 [2024-05-13 18:17:48.690327] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 61021 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 61021 ']' 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 61021 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61021 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61021' 00:04:33.110 killing process with pid 61021 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 61021 00:04:33.110 18:17:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 61021 00:04:33.368 00:04:33.368 real 0m2.014s 00:04:33.368 user 0m2.388s 00:04:33.368 sys 0m0.477s 00:04:33.368 18:17:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.368 18:17:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:33.368 ************************************ 00:04:33.368 END TEST exit_on_failed_rpc_init 00:04:33.368 ************************************ 00:04:33.368 18:17:49 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:33.626 00:04:33.626 real 0m15.057s 00:04:33.626 user 0m14.557s 00:04:33.626 sys 0m1.658s 00:04:33.626 18:17:49 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.626 18:17:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.626 ************************************ 00:04:33.626 END TEST skip_rpc 00:04:33.626 ************************************ 00:04:33.626 18:17:49 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:33.626 18:17:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:33.626 18:17:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:33.626 18:17:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.626 
************************************ 00:04:33.626 START TEST rpc_client 00:04:33.626 ************************************ 00:04:33.626 18:17:49 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:33.626 * Looking for test storage... 00:04:33.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:33.626 18:17:49 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:33.626 OK 00:04:33.626 18:17:49 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:33.626 00:04:33.626 real 0m0.099s 00:04:33.626 user 0m0.052s 00:04:33.626 sys 0m0.052s 00:04:33.626 18:17:49 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:33.626 18:17:49 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:33.626 ************************************ 00:04:33.626 END TEST rpc_client 00:04:33.626 ************************************ 00:04:33.626 18:17:49 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:33.626 18:17:49 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:33.626 18:17:49 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:33.626 18:17:49 -- common/autotest_common.sh@10 -- # set +x 00:04:33.626 ************************************ 00:04:33.626 START TEST json_config 00:04:33.626 ************************************ 00:04:33.626 18:17:49 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:33.885 18:17:49 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.885 18:17:49 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.885 18:17:49 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.885 18:17:49 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.885 18:17:49 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.885 18:17:49 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.885 18:17:49 json_config -- paths/export.sh@5 -- # export PATH 00:04:33.885 18:17:49 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@47 -- # : 0 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:33.885 18:17:49 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:33.885 18:17:49 json_config -- 
json_config/json_config.sh@31 -- # declare -A app_pid 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:33.885 INFO: JSON configuration test init 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:33.885 18:17:49 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:33.885 18:17:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:33.885 18:17:49 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:33.885 18:17:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.885 18:17:49 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:33.885 18:17:49 json_config -- json_config/common.sh@9 -- # local app=target 00:04:33.885 18:17:49 json_config -- json_config/common.sh@10 -- # shift 00:04:33.885 18:17:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.885 18:17:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.885 18:17:49 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.885 18:17:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.885 18:17:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.885 18:17:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61169 00:04:33.885 18:17:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.885 Waiting for target to run... 00:04:33.885 18:17:49 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:33.885 18:17:49 json_config -- json_config/common.sh@25 -- # waitforlisten 61169 /var/tmp/spdk_tgt.sock 00:04:33.885 18:17:49 json_config -- common/autotest_common.sh@827 -- # '[' -z 61169 ']' 00:04:33.885 18:17:49 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.885 18:17:49 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:33.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
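
At this point the json_config suite launches the target with --wait-for-rpc on a private socket, which defers subsystem initialization until it is driven over that socket. A rough sketch of the pattern, assuming (as the xtrace a little further down suggests) that the output of gen_nvme.sh is piped straight into load_config:

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    # once the socket answers, push a generated configuration to the idle target
    scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
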
00:04:33.885 18:17:49 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.885 18:17:49 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:33.885 18:17:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.885 [2024-05-13 18:17:49.668112] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:04:33.885 [2024-05-13 18:17:49.668212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61169 ] 00:04:34.451 [2024-05-13 18:17:50.104058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.451 [2024-05-13 18:17:50.198554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.709 18:17:50 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:34.709 18:17:50 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:34.709 00:04:34.709 18:17:50 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.709 18:17:50 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:34.709 18:17:50 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:34.709 18:17:50 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:34.709 18:17:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.709 18:17:50 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:34.709 18:17:50 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:34.709 18:17:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:34.709 18:17:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:34.967 18:17:50 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:34.967 18:17:50 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:34.967 18:17:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:35.532 18:17:51 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:35.532 18:17:51 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:35.532 18:17:51 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:35.532 18:17:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.532 18:17:51 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:35.532 18:17:51 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:35.532 18:17:51 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:35.532 18:17:51 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:35.532 18:17:51 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:35.532 18:17:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:35.532 18:17:51 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:35.532 18:17:51 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:35.532 
18:17:51 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:35.532 18:17:51 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:35.532 18:17:51 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.532 18:17:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.790 18:17:51 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:35.790 18:17:51 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:35.790 18:17:51 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:35.790 18:17:51 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:35.790 18:17:51 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:35.790 18:17:51 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:35.790 18:17:51 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:35.790 18:17:51 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:35.790 18:17:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.790 18:17:51 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:35.790 18:17:51 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:35.790 18:17:51 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:35.790 18:17:51 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:35.790 18:17:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:36.048 MallocForNvmf0 00:04:36.048 18:17:51 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:36.048 18:17:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:36.306 MallocForNvmf1 00:04:36.306 18:17:52 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:36.306 18:17:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:36.306 [2024-05-13 18:17:52.248984] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:36.564 18:17:52 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:36.564 18:17:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:36.822 18:17:52 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:36.822 18:17:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:36.822 18:17:52 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:36.822 18:17:52 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:37.080 18:17:53 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:37.080 18:17:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:37.336 [2024-05-13 18:17:53.229288] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:37.336 [2024-05-13 18:17:53.229546] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:37.336 18:17:53 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:37.336 18:17:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.336 18:17:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.594 18:17:53 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:37.594 18:17:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.594 18:17:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.594 18:17:53 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:37.594 18:17:53 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:37.594 18:17:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:37.852 MallocBdevForConfigChangeCheck 00:04:37.852 18:17:53 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:37.852 18:17:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:37.852 18:17:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.852 18:17:53 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:37.852 18:17:53 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.416 INFO: shutting down applications... 00:04:38.416 18:17:54 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
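
Everything the target now contains was created over RPC: three malloc bdevs (two for the nvmf namespaces, plus MallocBdevForConfigChangeCheck used later to provoke a config change), a TCP transport, an NVMe-oF subsystem carrying both namespaces and a listener on 127.0.0.1:4420, and finally a save_config dump that the relaunch below consumes. Condensed into one sequence (socket path and names exactly as in this run; the redirect target is assumed from the later --json relaunch):

    rpc() { scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
    rpc nvmf_create_transport -t tcp -u 8192 -c 0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
    rpc save_config > spdk_tgt_config.json
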
00:04:38.416 18:17:54 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:38.416 18:17:54 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:38.416 18:17:54 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:38.416 18:17:54 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:38.675 Calling clear_iscsi_subsystem 00:04:38.675 Calling clear_nvmf_subsystem 00:04:38.675 Calling clear_nbd_subsystem 00:04:38.675 Calling clear_ublk_subsystem 00:04:38.675 Calling clear_vhost_blk_subsystem 00:04:38.675 Calling clear_vhost_scsi_subsystem 00:04:38.675 Calling clear_bdev_subsystem 00:04:38.675 18:17:54 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:38.675 18:17:54 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:38.675 18:17:54 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:38.675 18:17:54 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:38.675 18:17:54 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:38.675 18:17:54 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:38.933 18:17:54 json_config -- json_config/json_config.sh@345 -- # break 00:04:38.933 18:17:54 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:38.933 18:17:54 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:38.933 18:17:54 json_config -- json_config/common.sh@31 -- # local app=target 00:04:38.933 18:17:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:38.933 18:17:54 json_config -- json_config/common.sh@35 -- # [[ -n 61169 ]] 00:04:38.933 18:17:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 61169 00:04:38.933 [2024-05-13 18:17:54.771739] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:38.933 18:17:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:38.933 18:17:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.933 18:17:54 json_config -- json_config/common.sh@41 -- # kill -0 61169 00:04:38.933 18:17:54 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.499 18:17:55 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.499 18:17:55 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.499 18:17:55 json_config -- json_config/common.sh@41 -- # kill -0 61169 00:04:39.499 18:17:55 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:39.499 18:17:55 json_config -- json_config/common.sh@43 -- # break 00:04:39.499 18:17:55 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:39.499 SPDK target shutdown done 00:04:39.499 18:17:55 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:39.499 INFO: relaunching applications... 00:04:39.499 18:17:55 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
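
The relaunch that follows is the actual verification: spdk_tgt is started again from the saved spdk_tgt_config.json, its live configuration is dumped a second time, and json_diff.sh runs both documents through config_filter.py -method sort before diffing, presumably so that ordering differences alone cannot produce a mismatch. A rough equivalent with placeholder file names (the real script wires these up through temp files and /dev/fd, as the xtrace below shows):

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &
    # after the target is up, capture the live config and compare the sorted forms
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > live_config.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > saved.sorted
    test/json_config/config_filter.py -method sort < live_config.json     > live.sorted
    diff -u saved.sorted live.sorted    # an empty diff means the config survived the round trip
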
00:04:39.499 18:17:55 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:39.499 18:17:55 json_config -- json_config/common.sh@9 -- # local app=target 00:04:39.499 18:17:55 json_config -- json_config/common.sh@10 -- # shift 00:04:39.499 18:17:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:39.499 18:17:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:39.499 18:17:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:39.499 18:17:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.499 18:17:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.499 18:17:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=61444 00:04:39.499 Waiting for target to run... 00:04:39.499 18:17:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:39.499 18:17:55 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:39.499 18:17:55 json_config -- json_config/common.sh@25 -- # waitforlisten 61444 /var/tmp/spdk_tgt.sock 00:04:39.499 18:17:55 json_config -- common/autotest_common.sh@827 -- # '[' -z 61444 ']' 00:04:39.499 18:17:55 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.499 18:17:55 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:39.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:39.499 18:17:55 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.499 18:17:55 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:39.499 18:17:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.499 [2024-05-13 18:17:55.345953] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:04:39.500 [2024-05-13 18:17:55.346056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61444 ] 00:04:40.065 [2024-05-13 18:17:55.761470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.065 [2024-05-13 18:17:55.867323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.323 [2024-05-13 18:17:56.179552] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:40.323 [2024-05-13 18:17:56.211458] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:04:40.323 [2024-05-13 18:17:56.211692] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:40.581 18:17:56 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:40.581 18:17:56 json_config -- common/autotest_common.sh@860 -- # return 0 00:04:40.581 00:04:40.581 18:17:56 json_config -- json_config/common.sh@26 -- # echo '' 00:04:40.581 18:17:56 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:40.581 INFO: Checking if target configuration is the same... 
00:04:40.581 18:17:56 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:40.581 18:17:56 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:40.581 18:17:56 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:40.581 18:17:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:40.581 + '[' 2 -ne 2 ']' 00:04:40.581 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:40.581 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:40.581 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:40.581 +++ basename /dev/fd/62 00:04:40.581 ++ mktemp /tmp/62.XXX 00:04:40.581 + tmp_file_1=/tmp/62.zRv 00:04:40.581 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:40.581 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:40.581 + tmp_file_2=/tmp/spdk_tgt_config.json.wCA 00:04:40.581 + ret=0 00:04:40.581 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:40.838 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:41.096 + diff -u /tmp/62.zRv /tmp/spdk_tgt_config.json.wCA 00:04:41.096 INFO: JSON config files are the same 00:04:41.096 + echo 'INFO: JSON config files are the same' 00:04:41.096 + rm /tmp/62.zRv /tmp/spdk_tgt_config.json.wCA 00:04:41.096 + exit 0 00:04:41.096 18:17:56 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:41.096 INFO: changing configuration and checking if this can be detected... 00:04:41.096 18:17:56 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:41.096 18:17:56 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:41.096 18:17:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:41.354 18:17:57 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:41.354 18:17:57 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:41.354 18:17:57 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.354 + '[' 2 -ne 2 ']' 00:04:41.354 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:41.354 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:41.354 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:41.354 +++ basename /dev/fd/62 00:04:41.354 ++ mktemp /tmp/62.XXX 00:04:41.354 + tmp_file_1=/tmp/62.UY8 00:04:41.354 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:41.354 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:41.354 + tmp_file_2=/tmp/spdk_tgt_config.json.5oA 00:04:41.354 + ret=0 00:04:41.354 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:41.612 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:41.612 + diff -u /tmp/62.UY8 /tmp/spdk_tgt_config.json.5oA 00:04:41.612 + ret=1 00:04:41.612 + echo '=== Start of file: /tmp/62.UY8 ===' 00:04:41.612 + cat /tmp/62.UY8 00:04:41.612 + echo '=== End of file: /tmp/62.UY8 ===' 00:04:41.612 + echo '' 00:04:41.612 + echo '=== Start of file: /tmp/spdk_tgt_config.json.5oA ===' 00:04:41.612 + cat /tmp/spdk_tgt_config.json.5oA 00:04:41.612 + echo '=== End of file: /tmp/spdk_tgt_config.json.5oA ===' 00:04:41.612 + echo '' 00:04:41.612 + rm /tmp/62.UY8 /tmp/spdk_tgt_config.json.5oA 00:04:41.612 + exit 1 00:04:41.612 INFO: configuration change detected. 00:04:41.612 18:17:57 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:41.612 18:17:57 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:41.612 18:17:57 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:41.612 18:17:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:41.612 18:17:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.612 18:17:57 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:41.612 18:17:57 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:41.612 18:17:57 json_config -- json_config/json_config.sh@317 -- # [[ -n 61444 ]] 00:04:41.612 18:17:57 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:41.612 18:17:57 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:41.612 18:17:57 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:41.612 18:17:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.612 18:17:57 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:04:41.612 18:17:57 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:41.612 18:17:57 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:04:41.612 18:17:57 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:04:41.612 18:17:57 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:41.612 18:17:57 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:41.612 18:17:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:41.612 18:17:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:41.879 18:17:57 json_config -- json_config/json_config.sh@323 -- # killprocess 61444 00:04:41.879 18:17:57 json_config -- common/autotest_common.sh@946 -- # '[' -z 61444 ']' 00:04:41.879 18:17:57 json_config -- common/autotest_common.sh@950 -- # kill -0 61444 00:04:41.879 18:17:57 json_config -- common/autotest_common.sh@951 -- # uname 00:04:41.879 18:17:57 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:41.879 18:17:57 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61444 00:04:41.879 
18:17:57 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:41.879 18:17:57 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:41.879 killing process with pid 61444 00:04:41.879 18:17:57 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61444' 00:04:41.879 18:17:57 json_config -- common/autotest_common.sh@965 -- # kill 61444 00:04:41.879 [2024-05-13 18:17:57.589882] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:04:41.879 18:17:57 json_config -- common/autotest_common.sh@970 -- # wait 61444 00:04:42.141 18:17:57 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:42.141 18:17:57 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:42.141 18:17:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:42.141 18:17:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.141 18:17:57 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:42.141 INFO: Success 00:04:42.141 18:17:57 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:42.141 00:04:42.141 real 0m8.410s 00:04:42.141 user 0m11.995s 00:04:42.141 sys 0m1.823s 00:04:42.141 18:17:57 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:42.141 ************************************ 00:04:42.141 18:17:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:42.141 END TEST json_config 00:04:42.141 ************************************ 00:04:42.141 18:17:57 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:42.141 18:17:57 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:42.141 18:17:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:42.141 18:17:57 -- common/autotest_common.sh@10 -- # set +x 00:04:42.141 ************************************ 00:04:42.141 START TEST json_config_extra_key 00:04:42.141 ************************************ 00:04:42.141 18:17:57 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:42.141 18:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:42.141 18:17:58 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:42.141 18:17:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.141 18:17:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.141 18:17:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.141 18:17:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.141 18:17:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:42.141 18:17:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:42.141 18:17:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.141 18:17:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:42.141 18:17:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.141 18:17:58 
json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:42.142 18:17:58 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.142 18:17:58 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.142 18:17:58 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.142 18:17:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.142 18:17:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.142 18:17:58 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.142 18:17:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:42.142 18:17:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:42.142 18:17:58 
json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:42.142 18:17:58 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:42.142 18:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:42.142 18:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:42.142 18:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:42.142 18:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:42.142 18:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:42.142 18:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:42.142 18:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:42.142 18:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:42.142 18:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:42.142 18:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:42.142 INFO: launching applications... 00:04:42.142 18:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:42.142 18:17:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:42.142 18:17:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:42.142 18:17:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:42.142 18:17:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:42.142 18:17:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:42.142 18:17:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:42.142 18:17:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.142 18:17:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:42.142 18:17:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=61614 00:04:42.142 Waiting for target to run... 00:04:42.142 18:17:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
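
json_config_extra_key boots the target directly from test/json_config/extra_key.json and then waits in waitforlisten for the RPC socket to come up. One way such readiness polling can be done (a hypothetical sketch, not necessarily what waitforlisten itself runs; rpc_get_methods is simply a cheap RPC to probe with):

    until scripts/rpc.py -t 1 -s /var/tmp/spdk_tgt.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done
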
00:04:42.142 18:17:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 61614 /var/tmp/spdk_tgt.sock 00:04:42.142 18:17:58 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 61614 ']' 00:04:42.142 18:17:58 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:42.142 18:17:58 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:42.142 18:17:58 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:42.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:42.142 18:17:58 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:42.142 18:17:58 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:42.142 18:17:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:42.400 [2024-05-13 18:17:58.126773] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:04:42.400 [2024-05-13 18:17:58.126888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61614 ] 00:04:42.967 [2024-05-13 18:17:58.672803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.967 [2024-05-13 18:17:58.772725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.225 18:17:59 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:43.226 18:17:59 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:04:43.226 00:04:43.226 18:17:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:43.226 18:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:43.226 INFO: shutting down applications... 
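For reference, the launch-and-wait phase captured above reduces to roughly the following shell sketch (the polling loop is a simplified stand-in for the waitforlisten helper, and the relative rpc.py path is an assumption, not taken from the log):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    sock=/var/tmp/spdk_tgt.sock
    # start the target with the extra_key JSON config on a dedicated RPC socket
    "$spdk_tgt" -m 0x1 -s 1024 -r "$sock" --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    tgt_pid=$!
    # poll until the UNIX-domain RPC socket answers before issuing further RPCs
    for _ in $(seq 1 30); do
        scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done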
00:04:43.226 18:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:43.226 18:17:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:43.226 18:17:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:43.226 18:17:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 61614 ]] 00:04:43.226 18:17:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 61614 00:04:43.226 18:17:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:43.226 18:17:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.226 18:17:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61614 00:04:43.226 18:17:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.792 18:17:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.792 18:17:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.792 18:17:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 61614 00:04:43.792 18:17:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:43.792 18:17:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:43.792 18:17:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:43.792 SPDK target shutdown done 00:04:43.792 18:17:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:43.792 Success 00:04:43.792 18:17:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:43.792 00:04:43.792 real 0m1.627s 00:04:43.792 user 0m1.412s 00:04:43.792 sys 0m0.575s 00:04:43.792 18:17:59 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.792 18:17:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:43.792 ************************************ 00:04:43.792 END TEST json_config_extra_key 00:04:43.792 ************************************ 00:04:43.792 18:17:59 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.792 18:17:59 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.792 18:17:59 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.792 18:17:59 -- common/autotest_common.sh@10 -- # set +x 00:04:43.792 ************************************ 00:04:43.792 START TEST alias_rpc 00:04:43.792 ************************************ 00:04:43.792 18:17:59 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:43.792 * Looking for test storage... 00:04:43.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:43.792 18:17:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:44.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:44.050 18:17:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61696 00:04:44.050 18:17:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61696 00:04:44.050 18:17:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:44.050 18:17:59 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 61696 ']' 00:04:44.050 18:17:59 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.050 18:17:59 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:44.050 18:17:59 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.050 18:17:59 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:44.050 18:17:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.050 [2024-05-13 18:17:59.836806] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:04:44.050 [2024-05-13 18:17:59.837242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61696 ] 00:04:44.050 [2024-05-13 18:17:59.984026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.307 [2024-05-13 18:18:00.120160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.239 18:18:00 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:45.239 18:18:00 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:45.239 18:18:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:45.239 18:18:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61696 00:04:45.239 18:18:01 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 61696 ']' 00:04:45.239 18:18:01 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 61696 00:04:45.239 18:18:01 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:04:45.239 18:18:01 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:45.239 18:18:01 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61696 00:04:45.497 killing process with pid 61696 00:04:45.497 18:18:01 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:45.497 18:18:01 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:45.497 18:18:01 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61696' 00:04:45.497 18:18:01 alias_rpc -- common/autotest_common.sh@965 -- # kill 61696 00:04:45.497 18:18:01 alias_rpc -- common/autotest_common.sh@970 -- # wait 61696 00:04:45.754 ************************************ 00:04:45.754 END TEST alias_rpc 00:04:45.754 ************************************ 00:04:45.754 00:04:45.754 real 0m1.987s 00:04:45.754 user 0m2.307s 00:04:45.754 sys 0m0.486s 00:04:45.754 18:18:01 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:45.754 18:18:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.754 18:18:01 -- spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:04:45.754 18:18:01 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:45.754 18:18:01 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:45.754 18:18:01 -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:04:45.754 18:18:01 -- common/autotest_common.sh@10 -- # set +x 00:04:45.754 ************************************ 00:04:45.754 START TEST dpdk_mem_utility 00:04:45.754 ************************************ 00:04:45.754 18:18:01 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:46.011 * Looking for test storage... 00:04:46.011 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:46.011 18:18:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:46.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.011 18:18:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61788 00:04:46.011 18:18:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61788 00:04:46.011 18:18:01 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:46.011 18:18:01 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 61788 ']' 00:04:46.011 18:18:01 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.011 18:18:01 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:46.011 18:18:01 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.011 18:18:01 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:46.011 18:18:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.011 [2024-05-13 18:18:01.845682] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:04:46.012 [2024-05-13 18:18:01.846032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61788 ] 00:04:46.269 [2024-05-13 18:18:01.983608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.269 [2024-05-13 18:18:02.131859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.202 18:18:02 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:47.202 18:18:02 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:04:47.202 18:18:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:47.202 18:18:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:47.202 18:18:02 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:47.202 18:18:02 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:47.202 { 00:04:47.202 "filename": "/tmp/spdk_mem_dump.txt" 00:04:47.202 } 00:04:47.202 18:18:02 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:47.202 18:18:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:47.202 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:47.202 1 heaps totaling size 814.000000 MiB 00:04:47.202 size: 814.000000 MiB heap id: 0 00:04:47.202 end heaps---------- 00:04:47.202 8 mempools totaling size 598.116089 MiB 00:04:47.202 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:47.202 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:47.202 size: 84.521057 MiB name: bdev_io_61788 00:04:47.202 size: 51.011292 MiB name: evtpool_61788 00:04:47.202 size: 50.003479 MiB name: msgpool_61788 00:04:47.202 size: 21.763794 MiB name: PDU_Pool 00:04:47.202 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:47.202 size: 0.026123 MiB name: Session_Pool 00:04:47.202 end mempools------- 00:04:47.202 6 memzones totaling size 4.142822 MiB 00:04:47.202 size: 1.000366 MiB name: RG_ring_0_61788 00:04:47.202 size: 1.000366 MiB name: RG_ring_1_61788 00:04:47.202 size: 1.000366 MiB name: RG_ring_4_61788 00:04:47.202 size: 1.000366 MiB name: RG_ring_5_61788 00:04:47.202 size: 0.125366 MiB name: RG_ring_2_61788 00:04:47.202 size: 0.015991 MiB name: RG_ring_3_61788 00:04:47.202 end memzones------- 00:04:47.202 18:18:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:47.202 heap id: 0 total size: 814.000000 MiB number of busy elements: 228 number of free elements: 15 00:04:47.202 list of free elements. 
size: 12.485107 MiB 00:04:47.202 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:47.202 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:47.202 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:47.202 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:47.202 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:47.202 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:47.202 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:47.202 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:47.202 element at address: 0x200000200000 with size: 0.836853 MiB 00:04:47.202 element at address: 0x20001aa00000 with size: 0.571533 MiB 00:04:47.202 element at address: 0x20000b200000 with size: 0.489441 MiB 00:04:47.202 element at address: 0x200000800000 with size: 0.486877 MiB 00:04:47.202 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:47.202 element at address: 0x200027e00000 with size: 0.398132 MiB 00:04:47.202 element at address: 0x200003a00000 with size: 0.351501 MiB 00:04:47.202 list of standard malloc elements. size: 199.252319 MiB 00:04:47.202 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:47.202 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:47.202 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:47.202 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:47.202 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:47.202 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:47.202 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:47.202 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:47.202 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:47.202 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:04:47.202 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:04:47.202 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:04:47.202 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d7640 with size: 0.000183 MiB 
00:04:47.203 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:04:47.203 element at 
address: 0x20000b27d640 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa940c0 
with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:47.203 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200027e6cb80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:04:47.203 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 
00:04:47.204 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:47.204 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:47.204 list of memzone associated elements. 
size: 602.262573 MiB 00:04:47.204 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:47.204 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:47.204 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:47.204 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:47.204 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:47.204 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61788_0 00:04:47.204 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:47.204 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61788_0 00:04:47.204 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:47.204 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61788_0 00:04:47.204 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:47.204 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:47.204 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:47.204 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:47.204 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:47.204 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61788 00:04:47.204 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:47.204 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61788 00:04:47.204 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:47.204 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61788 00:04:47.204 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:47.204 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:47.204 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:47.204 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:47.204 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:47.204 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:47.204 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:47.204 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:47.204 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:47.204 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61788 00:04:47.204 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:47.204 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61788 00:04:47.204 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:47.204 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61788 00:04:47.204 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:47.204 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61788 00:04:47.204 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:47.204 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61788 00:04:47.204 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:47.204 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:47.204 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:47.204 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:47.204 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:47.204 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:47.204 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:47.204 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61788 00:04:47.204 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:47.204 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:47.204 element at address: 0x200027e66040 with size: 0.023743 MiB 00:04:47.204 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:47.204 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:47.204 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61788 00:04:47.204 element at address: 0x200027e6c180 with size: 0.002441 MiB 00:04:47.204 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:47.204 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:04:47.204 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61788 00:04:47.204 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:47.204 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61788 00:04:47.204 element at address: 0x200027e6cc40 with size: 0.000305 MiB 00:04:47.204 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:47.204 18:18:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:47.204 18:18:02 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61788 00:04:47.204 18:18:02 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 61788 ']' 00:04:47.204 18:18:02 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 61788 00:04:47.204 18:18:02 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:04:47.204 18:18:02 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:47.204 18:18:02 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 61788 00:04:47.204 killing process with pid 61788 00:04:47.204 18:18:03 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:47.204 18:18:03 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:47.204 18:18:03 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 61788' 00:04:47.204 18:18:03 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 61788 00:04:47.204 18:18:03 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 61788 00:04:47.768 00:04:47.768 real 0m1.744s 00:04:47.768 user 0m1.849s 00:04:47.768 sys 0m0.490s 00:04:47.768 18:18:03 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.768 ************************************ 00:04:47.768 END TEST dpdk_mem_utility 00:04:47.768 ************************************ 00:04:47.768 18:18:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:47.768 18:18:03 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:47.768 18:18:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:47.768 18:18:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.768 18:18:03 -- common/autotest_common.sh@10 -- # set +x 00:04:47.768 ************************************ 00:04:47.768 START TEST event 00:04:47.768 ************************************ 00:04:47.768 18:18:03 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:47.768 * Looking for test storage... 
00:04:47.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:47.768 18:18:03 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:47.768 18:18:03 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:47.768 18:18:03 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:47.768 18:18:03 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:04:47.768 18:18:03 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.768 18:18:03 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.768 ************************************ 00:04:47.768 START TEST event_perf 00:04:47.768 ************************************ 00:04:47.768 18:18:03 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:47.768 Running I/O for 1 seconds...[2024-05-13 18:18:03.595739] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:04:47.768 [2024-05-13 18:18:03.595832] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61878 ] 00:04:48.024 [2024-05-13 18:18:03.733636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.024 [2024-05-13 18:18:03.853792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.024 Running I/O for 1 seconds...[2024-05-13 18:18:03.854081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.024 [2024-05-13 18:18:03.853947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.024 [2024-05-13 18:18:03.854076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.411 00:04:49.411 lcore 0: 185763 00:04:49.411 lcore 1: 185763 00:04:49.411 lcore 2: 185763 00:04:49.411 lcore 3: 185763 00:04:49.411 done. 00:04:49.411 00:04:49.411 real 0m1.389s 00:04:49.411 user 0m4.190s 00:04:49.411 sys 0m0.069s 00:04:49.411 18:18:04 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.411 18:18:04 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.411 ************************************ 00:04:49.411 END TEST event_perf 00:04:49.411 ************************************ 00:04:49.411 18:18:05 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:49.411 18:18:05 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:49.411 18:18:05 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.411 18:18:05 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.411 ************************************ 00:04:49.411 START TEST event_reactor 00:04:49.411 ************************************ 00:04:49.411 18:18:05 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:49.411 [2024-05-13 18:18:05.035363] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:04:49.411 [2024-05-13 18:18:05.035446] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61916 ] 00:04:49.411 [2024-05-13 18:18:05.168364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.411 [2024-05-13 18:18:05.271841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.853 test_start 00:04:50.853 oneshot 00:04:50.853 tick 100 00:04:50.853 tick 100 00:04:50.853 tick 250 00:04:50.853 tick 100 00:04:50.853 tick 100 00:04:50.853 tick 100 00:04:50.853 tick 250 00:04:50.853 tick 500 00:04:50.853 tick 100 00:04:50.853 tick 100 00:04:50.853 tick 250 00:04:50.853 tick 100 00:04:50.853 tick 100 00:04:50.853 test_end 00:04:50.853 00:04:50.853 real 0m1.359s 00:04:50.853 user 0m1.204s 00:04:50.853 sys 0m0.049s 00:04:50.853 18:18:06 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.853 ************************************ 00:04:50.853 18:18:06 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:50.853 END TEST event_reactor 00:04:50.853 ************************************ 00:04:50.853 18:18:06 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:50.853 18:18:06 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:50.853 18:18:06 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.853 18:18:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.853 ************************************ 00:04:50.853 START TEST event_reactor_perf 00:04:50.853 ************************************ 00:04:50.853 18:18:06 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:50.853 [2024-05-13 18:18:06.443776] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:04:50.853 [2024-05-13 18:18:06.443871] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61952 ] 00:04:50.853 [2024-05-13 18:18:06.582807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.853 [2024-05-13 18:18:06.693869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.231 test_start 00:04:52.231 test_end 00:04:52.231 Performance: 366525 events per second 00:04:52.231 00:04:52.231 real 0m1.377s 00:04:52.231 user 0m1.220s 00:04:52.231 sys 0m0.052s 00:04:52.231 18:18:07 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:52.231 18:18:07 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.231 ************************************ 00:04:52.231 END TEST event_reactor_perf 00:04:52.231 ************************************ 00:04:52.231 18:18:07 event -- event/event.sh@49 -- # uname -s 00:04:52.231 18:18:07 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:52.231 18:18:07 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.231 18:18:07 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:52.231 18:18:07 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:52.231 18:18:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.231 ************************************ 00:04:52.231 START TEST event_scheduler 00:04:52.231 ************************************ 00:04:52.231 18:18:07 event.event_scheduler -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.231 * Looking for test storage... 00:04:52.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:52.231 18:18:07 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:52.231 18:18:07 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=62013 00:04:52.231 18:18:07 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:52.232 18:18:07 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.232 18:18:07 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 62013 00:04:52.232 18:18:07 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 62013 ']' 00:04:52.232 18:18:07 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.232 18:18:07 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:52.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.232 18:18:07 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.232 18:18:07 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:52.232 18:18:07 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.232 [2024-05-13 18:18:07.991726] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:04:52.232 [2024-05-13 18:18:07.992527] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62013 ] 00:04:52.232 [2024-05-13 18:18:08.133723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:52.490 [2024-05-13 18:18:08.270620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.490 [2024-05-13 18:18:08.270684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:52.490 [2024-05-13 18:18:08.270840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:52.490 [2024-05-13 18:18:08.270856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.423 18:18:09 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:53.423 18:18:09 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:04:53.424 18:18:09 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:53.424 18:18:09 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.424 POWER: Env isn't set yet! 00:04:53.424 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:53.424 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.424 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.424 POWER: Attempting to initialise PSTAT power management... 00:04:53.424 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.424 POWER: Cannot set governor of lcore 0 to performance 00:04:53.424 POWER: Attempting to initialise AMD PSTATE power management... 00:04:53.424 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.424 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.424 POWER: Attempting to initialise CPPC power management... 00:04:53.424 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:53.424 POWER: Cannot set governor of lcore 0 to userspace 00:04:53.424 POWER: Attempting to initialise VM power management... 00:04:53.424 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:53.424 POWER: Unable to set Power Management Environment for lcore 0 00:04:53.424 [2024-05-13 18:18:09.012686] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:04:53.424 [2024-05-13 18:18:09.012700] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:04:53.424 [2024-05-13 18:18:09.012708] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:04:53.424 18:18:09 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.424 18:18:09 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:53.424 18:18:09 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.424 [2024-05-13 18:18:09.105905] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
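The scheduler bring-up just logged is driven by two framework RPCs while the app is still paused by --wait-for-rpc; condensed, it is roughly (sketch; rpc_cmd is the test wrapper around rpc.py, and the socket path is assumed):

    # select the dynamic scheduler before the framework initializes, then start it
    scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init

The POWER/GUEST_CHANNEL messages above come from the dynamic scheduler failing to find a usable cpufreq governor on this VM; the run tolerates the fallback and continues.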
00:04:53.424 18:18:09 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.424 18:18:09 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:53.424 18:18:09 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:53.424 18:18:09 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:53.424 ************************************ 00:04:53.424 START TEST scheduler_create_thread 00:04:53.424 ************************************ 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.424 2 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.424 3 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.424 4 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.424 5 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.424 6 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.424 7 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.424 8 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.424 9 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.424 10 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:53.424 18:18:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.797 18:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:54.797 18:18:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:54.797 18:18:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:54.797 18:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:54.797 18:18:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.169 ************************************ 00:04:56.169 END TEST scheduler_create_thread 00:04:56.169 ************************************ 00:04:56.169 18:18:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.169 00:04:56.169 real 0m2.615s 00:04:56.169 user 0m0.014s 00:04:56.169 sys 0m0.006s 00:04:56.169 18:18:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.169 18:18:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:56.169 18:18:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:56.169 18:18:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 62013 00:04:56.169 18:18:11 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 62013 ']' 00:04:56.169 18:18:11 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 62013 00:04:56.169 18:18:11 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:04:56.169 18:18:11 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:56.169 18:18:11 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62013 00:04:56.169 killing process with pid 62013 00:04:56.169 18:18:11 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:56.169 18:18:11 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:56.169 18:18:11 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62013' 00:04:56.169 18:18:11 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 62013 00:04:56.169 18:18:11 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 62013 00:04:56.427 [2024-05-13 18:18:12.210236] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
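For readers following the xtrace above: scheduler_create_thread drives SPDK's scheduler test application entirely through plugin RPCs. Below is a minimal sketch of that call sequence, assuming the test app is already listening on the default RPC socket and that the scheduler_plugin from test/event/scheduler is importable by rpc.py; the rpc.py path is the one used elsewhere in this log, while the helper function, loop, and variable names are illustrative only.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  rpc() { "$rpc_py" --plugin scheduler_plugin "$@"; }
  # One thread pinned to each core (-m cpumask): fully busy (-a 100) and fully idle (-a 0)
  for mask in 0x1 0x2 0x4 0x8; do
      rpc scheduler_thread_create -n active_pinned -m "$mask" -a 100
      rpc scheduler_thread_create -n idle_pinned   -m "$mask" -a 0
  done
  # Unpinned threads with partial load; scheduler_thread_create prints the new thread id,
  # which is why the trace above captures thread_id=11 and thread_id=12
  rpc scheduler_thread_create -n one_third_active -a 30
  tid=$(rpc scheduler_thread_create -n half_active -a 0)
  rpc scheduler_thread_set_active "$tid" 50        # bump its reported load to 50%
  tid=$(rpc scheduler_thread_create -n deleted -a 100)
  rpc scheduler_thread_delete "$tid"               # created threads can be torn down again

The captured thread ids are what the trace feeds into scheduler_thread_set_active 11 50 and scheduler_thread_delete 12 before the test shuts the application down.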
00:04:56.685 00:04:56.685 real 0m4.612s 00:04:56.685 user 0m8.669s 00:04:56.685 sys 0m0.385s 00:04:56.685 18:18:12 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:56.685 ************************************ 00:04:56.685 END TEST event_scheduler 00:04:56.685 ************************************ 00:04:56.685 18:18:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.685 18:18:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:56.685 18:18:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:56.685 18:18:12 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:56.685 18:18:12 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:56.685 18:18:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.685 ************************************ 00:04:56.685 START TEST app_repeat 00:04:56.685 ************************************ 00:04:56.685 18:18:12 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:04:56.685 18:18:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.685 18:18:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.685 18:18:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:56.685 18:18:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.685 18:18:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:56.685 18:18:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:56.685 18:18:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:56.685 Process app_repeat pid: 62132 00:04:56.685 spdk_app_start Round 0 00:04:56.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.685 18:18:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=62132 00:04:56.685 18:18:12 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:56.685 18:18:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.685 18:18:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 62132' 00:04:56.685 18:18:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:56.685 18:18:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:56.685 18:18:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62132 /var/tmp/spdk-nbd.sock 00:04:56.685 18:18:12 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 62132 ']' 00:04:56.685 18:18:12 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.685 18:18:12 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:56.685 18:18:12 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:56.685 18:18:12 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:56.685 18:18:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.685 [2024-05-13 18:18:12.551151] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:04:56.685 [2024-05-13 18:18:12.551228] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62132 ] 00:04:56.943 [2024-05-13 18:18:12.684009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.943 [2024-05-13 18:18:12.797941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.943 [2024-05-13 18:18:12.797950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.876 18:18:13 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:57.876 18:18:13 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:57.876 18:18:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.876 Malloc0 00:04:58.134 18:18:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:58.134 Malloc1 00:04:58.392 18:18:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.392 18:18:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.651 /dev/nbd0 00:04:58.651 18:18:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.651 18:18:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.651 18:18:14 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:58.651 18:18:14 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:58.651 18:18:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:58.651 18:18:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:58.651 18:18:14 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:58.651 18:18:14 event.app_repeat -- 
common/autotest_common.sh@869 -- # break 00:04:58.651 18:18:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:58.651 18:18:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:58.651 18:18:14 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.651 1+0 records in 00:04:58.651 1+0 records out 00:04:58.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294536 s, 13.9 MB/s 00:04:58.651 18:18:14 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.651 18:18:14 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:58.651 18:18:14 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.651 18:18:14 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:58.651 18:18:14 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:58.651 18:18:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.651 18:18:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.651 18:18:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.910 /dev/nbd1 00:04:58.910 18:18:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.910 18:18:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.910 18:18:14 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:58.910 18:18:14 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:58.910 18:18:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:58.910 18:18:14 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:58.910 18:18:14 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:58.910 18:18:14 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:58.910 18:18:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:58.910 18:18:14 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:58.910 18:18:14 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.910 1+0 records in 00:04:58.910 1+0 records out 00:04:58.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000491393 s, 8.3 MB/s 00:04:58.910 18:18:14 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.910 18:18:14 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:58.910 18:18:14 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.910 18:18:14 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:58.910 18:18:14 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:58.910 18:18:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.910 18:18:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.910 18:18:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.910 18:18:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.910 
18:18:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:59.168 { 00:04:59.168 "bdev_name": "Malloc0", 00:04:59.168 "nbd_device": "/dev/nbd0" 00:04:59.168 }, 00:04:59.168 { 00:04:59.168 "bdev_name": "Malloc1", 00:04:59.168 "nbd_device": "/dev/nbd1" 00:04:59.168 } 00:04:59.168 ]' 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:59.168 { 00:04:59.168 "bdev_name": "Malloc0", 00:04:59.168 "nbd_device": "/dev/nbd0" 00:04:59.168 }, 00:04:59.168 { 00:04:59.168 "bdev_name": "Malloc1", 00:04:59.168 "nbd_device": "/dev/nbd1" 00:04:59.168 } 00:04:59.168 ]' 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:59.168 /dev/nbd1' 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:59.168 /dev/nbd1' 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:59.168 256+0 records in 00:04:59.168 256+0 records out 00:04:59.168 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0073495 s, 143 MB/s 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.168 18:18:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:59.426 256+0 records in 00:04:59.426 256+0 records out 00:04:59.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0330676 s, 31.7 MB/s 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:59.426 256+0 records in 00:04:59.426 256+0 records out 00:04:59.426 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0387052 s, 27.1 MB/s 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:59.426 18:18:15 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.426 18:18:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.683 18:18:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.683 18:18:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.683 18:18:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.683 18:18:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.683 18:18:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.683 18:18:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.683 18:18:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.683 18:18:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.683 18:18:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.683 18:18:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:59.940 18:18:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:59.940 18:18:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:59.940 18:18:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:59.940 18:18:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.940 18:18:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.940 18:18:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:59.940 18:18:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.940 18:18:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.940 18:18:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.940 18:18:15 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.940 18:18:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:00.198 18:18:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:00.198 18:18:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:00.198 18:18:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:00.198 18:18:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:00.198 18:18:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:00.198 18:18:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:00.198 18:18:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:00.198 18:18:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:00.198 18:18:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:00.198 18:18:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:00.198 18:18:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:00.198 18:18:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:00.198 18:18:16 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.764 18:18:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:00.764 [2024-05-13 18:18:16.614436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.022 [2024-05-13 18:18:16.718197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.022 [2024-05-13 18:18:16.718205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.022 [2024-05-13 18:18:16.772311] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:01.022 [2024-05-13 18:18:16.772377] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:03.555 18:18:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:03.555 spdk_app_start Round 1 00:05:03.555 18:18:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:03.555 18:18:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62132 /var/tmp/spdk-nbd.sock 00:05:03.555 18:18:19 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 62132 ']' 00:05:03.555 18:18:19 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.555 18:18:19 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:03.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:03.555 18:18:19 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:03.555 18:18:19 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:03.555 18:18:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.814 18:18:19 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:03.814 18:18:19 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:03.814 18:18:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.072 Malloc0 00:05:04.072 18:18:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:04.330 Malloc1 00:05:04.588 18:18:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.588 18:18:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:04.846 /dev/nbd0 00:05:04.846 18:18:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:04.846 18:18:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:04.846 18:18:20 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:04.846 18:18:20 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:04.846 18:18:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:04.846 18:18:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:04.846 18:18:20 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:04.846 18:18:20 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:04.846 18:18:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:04.846 18:18:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:04.846 18:18:20 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.846 1+0 records in 00:05:04.846 1+0 records out 
00:05:04.846 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239471 s, 17.1 MB/s 00:05:04.846 18:18:20 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.846 18:18:20 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:04.846 18:18:20 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:04.846 18:18:20 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:04.846 18:18:20 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:04.846 18:18:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.846 18:18:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.846 18:18:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:05.104 /dev/nbd1 00:05:05.104 18:18:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:05.104 18:18:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:05.104 18:18:20 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:05.104 18:18:20 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:05.104 18:18:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:05.104 18:18:20 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:05.104 18:18:20 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:05.104 18:18:20 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:05.104 18:18:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:05.104 18:18:20 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:05.104 18:18:20 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:05.104 1+0 records in 00:05:05.104 1+0 records out 00:05:05.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361595 s, 11.3 MB/s 00:05:05.104 18:18:20 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.104 18:18:20 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:05.104 18:18:20 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:05.104 18:18:20 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:05.104 18:18:20 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:05.104 18:18:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:05.104 18:18:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:05.104 18:18:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.104 18:18:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.104 18:18:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.360 18:18:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:05.360 { 00:05:05.360 "bdev_name": "Malloc0", 00:05:05.360 "nbd_device": "/dev/nbd0" 00:05:05.360 }, 00:05:05.360 { 00:05:05.360 "bdev_name": "Malloc1", 00:05:05.360 "nbd_device": "/dev/nbd1" 00:05:05.360 } 
00:05:05.360 ]' 00:05:05.360 18:18:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.360 { 00:05:05.360 "bdev_name": "Malloc0", 00:05:05.360 "nbd_device": "/dev/nbd0" 00:05:05.360 }, 00:05:05.360 { 00:05:05.360 "bdev_name": "Malloc1", 00:05:05.360 "nbd_device": "/dev/nbd1" 00:05:05.360 } 00:05:05.360 ]' 00:05:05.360 18:18:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.360 18:18:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.360 /dev/nbd1' 00:05:05.360 18:18:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.360 /dev/nbd1' 00:05:05.360 18:18:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.360 18:18:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.360 18:18:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.360 18:18:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.360 18:18:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.360 18:18:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.360 18:18:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.360 18:18:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.360 18:18:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.361 18:18:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.361 18:18:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.361 18:18:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.361 256+0 records in 00:05:05.361 256+0 records out 00:05:05.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105018 s, 99.8 MB/s 00:05:05.361 18:18:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.361 18:18:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.361 256+0 records in 00:05:05.361 256+0 records out 00:05:05.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0319379 s, 32.8 MB/s 00:05:05.361 18:18:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.361 18:18:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.617 256+0 records in 00:05:05.617 256+0 records out 00:05:05.617 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0392525 s, 26.7 MB/s 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.617 18:18:21 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.617 18:18:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.874 18:18:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.874 18:18:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.874 18:18:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.874 18:18:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.874 18:18:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.874 18:18:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.874 18:18:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:05.874 18:18:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.874 18:18:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.874 18:18:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:06.131 18:18:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:06.131 18:18:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:06.131 18:18:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:06.131 18:18:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:06.131 18:18:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:06.131 18:18:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:06.131 18:18:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:06.131 18:18:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:06.131 18:18:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:06.131 18:18:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.131 18:18:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:06.389 18:18:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:06.389 18:18:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:06.389 18:18:22 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:06.389 18:18:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:06.389 18:18:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:06.389 18:18:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:06.389 18:18:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:06.389 18:18:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:06.389 18:18:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:06.389 18:18:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:06.389 18:18:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:06.389 18:18:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:06.389 18:18:22 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.647 18:18:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:06.905 [2024-05-13 18:18:22.786955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:07.164 [2024-05-13 18:18:22.899093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:07.164 [2024-05-13 18:18:22.899105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.164 [2024-05-13 18:18:22.954665] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:07.164 [2024-05-13 18:18:22.954737] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.689 18:18:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.689 spdk_app_start Round 2 00:05:09.689 18:18:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:09.689 18:18:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 62132 /var/tmp/spdk-nbd.sock 00:05:09.689 18:18:25 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 62132 ']' 00:05:09.689 18:18:25 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.689 18:18:25 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:09.689 18:18:25 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:09.689 18:18:25 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:09.689 18:18:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.946 18:18:25 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:09.946 18:18:25 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:09.946 18:18:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.203 Malloc0 00:05:10.203 18:18:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.460 Malloc1 00:05:10.460 18:18:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.460 18:18:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.783 /dev/nbd0 00:05:10.783 18:18:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.783 18:18:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.783 18:18:26 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:05:10.783 18:18:26 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:10.783 18:18:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:10.783 18:18:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:10.783 18:18:26 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:05:10.783 18:18:26 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:10.783 18:18:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:10.783 18:18:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:10.783 18:18:26 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.783 1+0 records in 00:05:10.783 1+0 records out 
00:05:10.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246239 s, 16.6 MB/s 00:05:10.783 18:18:26 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.783 18:18:26 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:10.783 18:18:26 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:10.783 18:18:26 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:10.783 18:18:26 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:10.783 18:18:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.783 18:18:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.783 18:18:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.042 /dev/nbd1 00:05:11.042 18:18:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.301 18:18:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.301 18:18:26 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:05:11.301 18:18:26 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:05:11.301 18:18:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:05:11.301 18:18:26 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:05:11.301 18:18:26 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:05:11.301 18:18:26 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:05:11.301 18:18:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:05:11.301 18:18:26 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:05:11.301 18:18:26 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.301 1+0 records in 00:05:11.301 1+0 records out 00:05:11.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280364 s, 14.6 MB/s 00:05:11.301 18:18:26 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.301 18:18:26 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:05:11.301 18:18:26 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.301 18:18:26 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:05:11.301 18:18:26 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:05:11.301 18:18:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.301 18:18:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.301 18:18:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.301 18:18:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.301 18:18:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.561 { 00:05:11.561 "bdev_name": "Malloc0", 00:05:11.561 "nbd_device": "/dev/nbd0" 00:05:11.561 }, 00:05:11.561 { 00:05:11.561 "bdev_name": "Malloc1", 00:05:11.561 "nbd_device": "/dev/nbd1" 00:05:11.561 } 
00:05:11.561 ]' 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.561 { 00:05:11.561 "bdev_name": "Malloc0", 00:05:11.561 "nbd_device": "/dev/nbd0" 00:05:11.561 }, 00:05:11.561 { 00:05:11.561 "bdev_name": "Malloc1", 00:05:11.561 "nbd_device": "/dev/nbd1" 00:05:11.561 } 00:05:11.561 ]' 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.561 /dev/nbd1' 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.561 /dev/nbd1' 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.561 256+0 records in 00:05:11.561 256+0 records out 00:05:11.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00868615 s, 121 MB/s 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.561 256+0 records in 00:05:11.561 256+0 records out 00:05:11.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025033 s, 41.9 MB/s 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.561 256+0 records in 00:05:11.561 256+0 records out 00:05:11.561 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271966 s, 38.6 MB/s 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.561 18:18:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.818 18:18:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.818 18:18:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.818 18:18:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.818 18:18:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.818 18:18:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.819 18:18:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.819 18:18:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.819 18:18:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.819 18:18:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.819 18:18:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.384 18:18:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.384 18:18:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.384 18:18:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.384 18:18:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.384 18:18:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.384 18:18:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.384 18:18:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:12.384 18:18:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.384 18:18:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.384 18:18:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.384 18:18:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.384 18:18:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.384 18:18:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.384 18:18:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:12.641 18:18:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.641 18:18:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.641 18:18:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.641 18:18:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.641 18:18:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.641 18:18:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.641 18:18:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.641 18:18:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.641 18:18:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.641 18:18:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.899 18:18:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.158 [2024-05-13 18:18:28.931689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.158 [2024-05-13 18:18:29.036978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.158 [2024-05-13 18:18:29.036988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.158 [2024-05-13 18:18:29.091306] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.158 [2024-05-13 18:18:29.091366] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:16.440 18:18:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 62132 /var/tmp/spdk-nbd.sock 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 62132 ']' 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:16.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:05:16.440 18:18:31 event.app_repeat -- event/event.sh@39 -- # killprocess 62132 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 62132 ']' 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 62132 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62132 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:16.440 killing process with pid 62132 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62132' 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@965 -- # kill 62132 00:05:16.440 18:18:31 event.app_repeat -- common/autotest_common.sh@970 -- # wait 62132 00:05:16.440 spdk_app_start is called in Round 0. 00:05:16.440 Shutdown signal received, stop current app iteration 00:05:16.440 Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 reinitialization... 00:05:16.440 spdk_app_start is called in Round 1. 00:05:16.440 Shutdown signal received, stop current app iteration 00:05:16.440 Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 reinitialization... 00:05:16.440 spdk_app_start is called in Round 2. 00:05:16.440 Shutdown signal received, stop current app iteration 00:05:16.440 Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 reinitialization... 00:05:16.440 spdk_app_start is called in Round 3. 00:05:16.440 Shutdown signal received, stop current app iteration 00:05:16.440 18:18:32 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:16.440 18:18:32 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:16.440 00:05:16.440 real 0m19.695s 00:05:16.440 user 0m44.185s 00:05:16.440 sys 0m3.205s 00:05:16.440 18:18:32 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.440 ************************************ 00:05:16.440 END TEST app_repeat 00:05:16.440 ************************************ 00:05:16.440 18:18:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.440 18:18:32 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:16.440 18:18:32 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:16.440 18:18:32 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.440 18:18:32 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.440 18:18:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.440 ************************************ 00:05:16.440 START TEST cpu_locks 00:05:16.440 ************************************ 00:05:16.440 18:18:32 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:16.440 * Looking for test storage... 
00:05:16.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:16.440 18:18:32 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:16.440 18:18:32 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:16.441 18:18:32 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:16.441 18:18:32 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:16.441 18:18:32 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:16.441 18:18:32 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.441 18:18:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.441 ************************************ 00:05:16.441 START TEST default_locks 00:05:16.441 ************************************ 00:05:16.441 18:18:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:05:16.441 18:18:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62763 00:05:16.441 18:18:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 62763 00:05:16.441 18:18:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.441 18:18:32 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 62763 ']' 00:05:16.441 18:18:32 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.441 18:18:32 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:16.441 18:18:32 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.441 18:18:32 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:16.441 18:18:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.698 [2024-05-13 18:18:32.423481] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:05:16.698 [2024-05-13 18:18:32.423627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62763 ] 00:05:16.698 [2024-05-13 18:18:32.562342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.956 [2024-05-13 18:18:32.683195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.521 18:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:17.521 18:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:05:17.521 18:18:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 62763 00:05:17.521 18:18:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.521 18:18:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 62763 00:05:18.084 18:18:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 62763 00:05:18.084 18:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 62763 ']' 00:05:18.084 18:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 62763 00:05:18.084 18:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:05:18.084 18:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:18.084 18:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62763 00:05:18.084 18:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:18.084 killing process with pid 62763 00:05:18.084 18:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:18.084 18:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62763' 00:05:18.084 18:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 62763 00:05:18.084 18:18:33 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 62763 00:05:18.648 18:18:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62763 00:05:18.648 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:18.648 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 62763 00:05:18.648 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:18.648 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.648 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:18.648 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:18.648 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 62763 00:05:18.648 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 62763 ']' 00:05:18.648 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.648 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.649 ERROR: process (pid: 62763) is no longer running 00:05:18.649 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (62763) - No such process 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:18.649 00:05:18.649 real 0m2.004s 00:05:18.649 user 0m2.191s 00:05:18.649 sys 0m0.584s 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:18.649 ************************************ 00:05:18.649 END TEST default_locks 00:05:18.649 ************************************ 00:05:18.649 18:18:34 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.649 18:18:34 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:18.649 18:18:34 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:18.649 18:18:34 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:18.649 18:18:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:18.649 ************************************ 00:05:18.649 START TEST default_locks_via_rpc 00:05:18.649 ************************************ 00:05:18.649 18:18:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:05:18.649 18:18:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62827 00:05:18.649 18:18:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:18.649 18:18:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 62827 00:05:18.649 18:18:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 62827 ']' 00:05:18.649 18:18:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.649 18:18:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:18.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
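The default_locks run above reduces to: start one target pinned to core 0, confirm with lslocks that the process holds a lock on a file whose name contains spdk_cpu_lock, then tear it down. A rough stand-alone re-creation, assuming the binary path from the log, a prepared SPDK environment (hugepages set up), and a plain sleep standing in for the suite's waitforlisten helper:

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_BIN" -m 0x1 &           # claim core 0
tgt_pid=$!
sleep 2                        # crude; the suite polls the RPC socket instead

# locks_exist-style check: the pid should hold a lock on a
# /var/tmp/spdk_cpu_lock_* file, which lslocks reports by path.
lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core lock held by pid $tgt_pid"

kill "$tgt_pid" && wait "$tgt_pid"   # same teardown killprocess performs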
00:05:18.649 18:18:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.649 18:18:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:18.649 18:18:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.649 [2024-05-13 18:18:34.480242] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:18.649 [2024-05-13 18:18:34.480339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62827 ] 00:05:18.906 [2024-05-13 18:18:34.613069] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.906 [2024-05-13 18:18:34.729506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 62827 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:19.841 18:18:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 62827 00:05:20.099 18:18:35 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 62827 00:05:20.099 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 62827 ']' 00:05:20.099 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 62827 00:05:20.099 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:05:20.099 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:20.099 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- 
# ps --no-headers -o comm= 62827 00:05:20.099 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:20.099 killing process with pid 62827 00:05:20.099 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:20.099 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62827' 00:05:20.099 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 62827 00:05:20.099 18:18:35 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 62827 00:05:20.357 00:05:20.357 real 0m1.879s 00:05:20.357 user 0m2.029s 00:05:20.357 sys 0m0.546s 00:05:20.357 18:18:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:20.357 18:18:36 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:20.357 ************************************ 00:05:20.357 END TEST default_locks_via_rpc 00:05:20.357 ************************************ 00:05:20.615 18:18:36 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:20.615 18:18:36 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:20.615 18:18:36 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:20.615 18:18:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:20.615 ************************************ 00:05:20.615 START TEST non_locking_app_on_locked_coremask 00:05:20.615 ************************************ 00:05:20.615 18:18:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:05:20.615 18:18:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62896 00:05:20.615 18:18:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 62896 /var/tmp/spdk.sock 00:05:20.615 18:18:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:20.615 18:18:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62896 ']' 00:05:20.615 18:18:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.615 18:18:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:20.615 18:18:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.615 18:18:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:20.615 18:18:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.615 [2024-05-13 18:18:36.392990] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
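default_locks_via_rpc exercises the same lock files but toggles them on a live target through JSON-RPC rather than command-line flags. A short sketch of that interaction, assuming a target is already serving RPC on /var/tmp/spdk.sock:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/spdk.sock

# Release the per-core lock files on the running target ...
"$RPC" -s "$SOCK" framework_disable_cpumask_locks
lslocks | grep spdk_cpu_lock || echo "no core lock files held (expected here)"

# ... and claim them again without restarting the app.
"$RPC" -s "$SOCK" framework_enable_cpumask_locks
lslocks | grep -q spdk_cpu_lock && echo "core lock files re-acquired"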
00:05:20.615 [2024-05-13 18:18:36.393072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62896 ] 00:05:20.615 [2024-05-13 18:18:36.527557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.873 [2024-05-13 18:18:36.638517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.805 18:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:21.805 18:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:21.805 18:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62924 00:05:21.805 18:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 62924 /var/tmp/spdk2.sock 00:05:21.805 18:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:21.805 18:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 62924 ']' 00:05:21.805 18:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:21.805 18:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:21.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:21.805 18:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:21.805 18:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:21.805 18:18:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:21.805 [2024-05-13 18:18:37.497729] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:21.805 [2024-05-13 18:18:37.497843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62924 ] 00:05:21.805 [2024-05-13 18:18:37.643845] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:21.805 [2024-05-13 18:18:37.643892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.062 [2024-05-13 18:18:37.869411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.628 18:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:22.628 18:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:22.628 18:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 62896 00:05:22.628 18:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 62896 00:05:22.628 18:18:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:23.562 18:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 62896 00:05:23.562 18:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62896 ']' 00:05:23.562 18:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 62896 00:05:23.562 18:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:23.562 18:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:23.562 18:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62896 00:05:23.562 18:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:23.562 killing process with pid 62896 00:05:23.562 18:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:23.562 18:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62896' 00:05:23.562 18:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 62896 00:05:23.562 18:18:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 62896 00:05:24.171 18:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 62924 00:05:24.171 18:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 62924 ']' 00:05:24.171 18:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 62924 00:05:24.171 18:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:24.171 18:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:24.171 18:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 62924 00:05:24.171 18:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:24.171 18:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:24.171 killing process with pid 62924 00:05:24.171 18:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 62924' 00:05:24.171 18:18:40 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 62924 00:05:24.171 18:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 62924 00:05:24.735 00:05:24.735 real 0m4.161s 00:05:24.735 user 0m4.648s 00:05:24.735 sys 0m1.125s 00:05:24.735 18:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:24.735 18:18:40 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.735 ************************************ 00:05:24.735 END TEST non_locking_app_on_locked_coremask 00:05:24.735 ************************************ 00:05:24.735 18:18:40 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:24.735 18:18:40 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:24.735 18:18:40 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:24.735 18:18:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:24.735 ************************************ 00:05:24.735 START TEST locking_app_on_unlocked_coremask 00:05:24.735 ************************************ 00:05:24.735 18:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:05:24.735 18:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=63003 00:05:24.735 18:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 63003 /var/tmp/spdk.sock 00:05:24.735 18:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:24.735 18:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 63003 ']' 00:05:24.735 18:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.735 18:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:24.735 18:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.735 18:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:24.735 18:18:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.735 [2024-05-13 18:18:40.600818] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:24.735 [2024-05-13 18:18:40.600901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63003 ] 00:05:24.993 [2024-05-13 18:18:40.731742] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
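The point of non_locking_app_on_locked_coremask is that a second target may share core 0 with a lock-holding one as long as it opts out of the core locks. A condensed version of the two launches the trace shows, with the same binary path and sleeps in place of waitforlisten:

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_BIN" -m 0x1 &                                   # first target claims core 0
pid1=$!
sleep 2

# Second target reuses core 0 but skips the lock, so startup succeeds.
"$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!
sleep 2

kill "$pid1" "$pid2" && wait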
00:05:24.993 [2024-05-13 18:18:40.731785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.993 [2024-05-13 18:18:40.838550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.928 18:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:25.928 18:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:25.928 18:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=63031 00:05:25.928 18:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:25.928 18:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 63031 /var/tmp/spdk2.sock 00:05:25.928 18:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 63031 ']' 00:05:25.928 18:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.928 18:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:25.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:25.928 18:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.928 18:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:25.928 18:18:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.928 [2024-05-13 18:18:41.689633] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:05:25.928 [2024-05-13 18:18:41.689738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63031 ] 00:05:25.928 [2024-05-13 18:18:41.834804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.187 [2024-05-13 18:18:42.070097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.753 18:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:26.753 18:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:26.753 18:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 63031 00:05:26.753 18:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63031 00:05:26.753 18:18:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.702 18:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 63003 00:05:27.702 18:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 63003 ']' 00:05:27.702 18:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 63003 00:05:27.702 18:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:27.702 18:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:27.702 18:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63003 00:05:27.702 killing process with pid 63003 00:05:27.702 18:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:27.702 18:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:27.702 18:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63003' 00:05:27.702 18:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 63003 00:05:27.702 18:18:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 63003 00:05:28.633 18:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 63031 00:05:28.633 18:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 63031 ']' 00:05:28.633 18:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 63031 00:05:28.633 18:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:28.633 18:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:28.633 18:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63031 00:05:28.633 killing process with pid 63031 00:05:28.633 18:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:28.633 18:18:44 event.cpu_locks.locking_app_on_unlocked_coremask 
-- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:28.633 18:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63031' 00:05:28.633 18:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 63031 00:05:28.633 18:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 63031 00:05:28.891 ************************************ 00:05:28.891 END TEST locking_app_on_unlocked_coremask 00:05:28.891 ************************************ 00:05:28.891 00:05:28.891 real 0m4.193s 00:05:28.891 user 0m4.685s 00:05:28.891 sys 0m1.097s 00:05:28.891 18:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:28.891 18:18:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.891 18:18:44 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:28.891 18:18:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:28.891 18:18:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:28.891 18:18:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.891 ************************************ 00:05:28.891 START TEST locking_app_on_locked_coremask 00:05:28.891 ************************************ 00:05:28.891 18:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:05:28.891 18:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=63110 00:05:28.891 18:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.891 18:18:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 63110 /var/tmp/spdk.sock 00:05:28.891 18:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 63110 ']' 00:05:28.891 18:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.891 18:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:28.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.891 18:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.891 18:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:28.891 18:18:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.148 [2024-05-13 18:18:44.853015] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:05:29.148 [2024-05-13 18:18:44.853114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63110 ] 00:05:29.148 [2024-05-13 18:18:44.992041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.406 [2024-05-13 18:18:45.114521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=63138 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 63138 /var/tmp/spdk2.sock 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63138 /var/tmp/spdk2.sock 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63138 /var/tmp/spdk2.sock 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 63138 ']' 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.972 18:18:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.972 [2024-05-13 18:18:45.868636] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:05:29.972 [2024-05-13 18:18:45.868733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63138 ] 00:05:30.230 [2024-05-13 18:18:46.013820] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 63110 has claimed it. 00:05:30.230 [2024-05-13 18:18:46.013883] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:30.796 ERROR: process (pid: 63138) is no longer running 00:05:30.796 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (63138) - No such process 00:05:30.796 18:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:30.796 18:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:30.796 18:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:30.796 18:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.796 18:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:30.796 18:18:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.796 18:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 63110 00:05:30.796 18:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 63110 00:05:30.796 18:18:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.401 18:18:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 63110 00:05:31.401 18:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 63110 ']' 00:05:31.401 18:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 63110 00:05:31.401 18:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:05:31.401 18:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:31.401 18:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63110 00:05:31.401 18:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:31.401 18:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:31.401 18:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63110' 00:05:31.401 killing process with pid 63110 00:05:31.401 18:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 63110 00:05:31.401 18:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 63110 00:05:31.665 00:05:31.665 real 0m2.729s 00:05:31.665 user 0m3.165s 00:05:31.665 sys 0m0.676s 00:05:31.665 18:18:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:31.665 18:18:47 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:31.665 ************************************ 00:05:31.665 END TEST locking_app_on_locked_coremask 00:05:31.665 ************************************ 00:05:31.666 18:18:47 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:31.666 18:18:47 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:31.666 18:18:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:31.666 18:18:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.666 ************************************ 00:05:31.666 START TEST locking_overlapped_coremask 00:05:31.666 ************************************ 00:05:31.666 18:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:05:31.666 18:18:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=63190 00:05:31.666 18:18:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 63190 /var/tmp/spdk.sock 00:05:31.666 18:18:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:31.666 18:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 63190 ']' 00:05:31.666 18:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.666 18:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:31.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.666 18:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.666 18:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:31.666 18:18:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.929 [2024-05-13 18:18:47.639810] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
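locking_app_on_locked_coremask is the negative counterpart: with locks left on, a second target asking for the already-claimed core 0 must log "Cannot create lock on core 0, probably process ... has claimed it" and exit non-zero, which the suite asserts through its NOT helper. Roughly, and again only as a sketch using the paths from the log:

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_BIN" -m 0x1 &
pid1=$!
sleep 2

# Same core mask, locks enabled: this instance should refuse to start.
if ! "$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock; then
    echo "second instance exited as expected (core 0 already claimed)"
fi

kill "$pid1" && wait "$pid1"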
00:05:31.929 [2024-05-13 18:18:47.639913] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63190 ] 00:05:31.929 [2024-05-13 18:18:47.779076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.187 [2024-05-13 18:18:47.909927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.187 [2024-05-13 18:18:47.910039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.187 [2024-05-13 18:18:47.910049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=63220 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 63220 /var/tmp/spdk2.sock 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 63220 /var/tmp/spdk2.sock 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 63220 /var/tmp/spdk2.sock 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 63220 ']' 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:32.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:32.754 18:18:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.754 [2024-05-13 18:18:48.665013] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:05:32.754 [2024-05-13 18:18:48.665594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63220 ] 00:05:33.012 [2024-05-13 18:18:48.811319] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63190 has claimed it. 00:05:33.012 [2024-05-13 18:18:48.811390] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:33.580 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (63220) - No such process 00:05:33.580 ERROR: process (pid: 63220) is no longer running 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 63190 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 63190 ']' 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 63190 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63190 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:33.580 killing process with pid 63190 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63190' 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 63190 00:05:33.580 18:18:49 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 63190 00:05:34.146 00:05:34.146 real 0m2.280s 00:05:34.146 user 0m6.191s 00:05:34.146 sys 0m0.480s 00:05:34.146 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:34.146 18:18:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.146 ************************************ 00:05:34.146 END TEST locking_overlapped_coremask 00:05:34.146 ************************************ 00:05:34.146 18:18:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:34.146 18:18:49 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:34.146 18:18:49 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:34.146 18:18:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.146 ************************************ 00:05:34.146 START TEST locking_overlapped_coremask_via_rpc 00:05:34.146 ************************************ 00:05:34.146 18:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:05:34.146 18:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=63271 00:05:34.146 18:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 63271 /var/tmp/spdk.sock 00:05:34.146 18:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:34.146 18:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 63271 ']' 00:05:34.146 18:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.146 18:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:34.147 18:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.147 18:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:34.147 18:18:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.147 [2024-05-13 18:18:49.971451] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:34.147 [2024-05-13 18:18:49.971558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63271 ] 00:05:34.405 [2024-05-13 18:18:50.106323] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
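check_remaining_locks, which locking_overlapped_coremask ends with, is just a glob comparison: while a target started with -m 0x7 is running it should leave exactly three lock files behind. The check as traced, usable on its own against a running 0x7 target:

# Expect exactly /var/tmp/spdk_cpu_lock_000 .. _002 for a 0x7 core mask.
locks=(/var/tmp/spdk_cpu_lock_*)
expected=(/var/tmp/spdk_cpu_lock_{000..002})

if [[ "${locks[*]}" == "${expected[*]}" ]]; then
    echo "lock files match the 0x7 coremask"
else
    echo "unexpected lock files: ${locks[*]}" >&2
fi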
00:05:34.405 [2024-05-13 18:18:50.106373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.405 [2024-05-13 18:18:50.223716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.405 [2024-05-13 18:18:50.223793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.405 [2024-05-13 18:18:50.223796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.337 18:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:35.337 18:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:35.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.337 18:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=63301 00:05:35.337 18:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:35.337 18:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 63301 /var/tmp/spdk2.sock 00:05:35.337 18:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 63301 ']' 00:05:35.337 18:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.337 18:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:35.337 18:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.337 18:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:35.337 18:18:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.337 [2024-05-13 18:18:50.996437] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:35.337 [2024-05-13 18:18:50.996525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63301 ] 00:05:35.337 [2024-05-13 18:18:51.137194] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:35.337 [2024-05-13 18:18:51.137297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.594 [2024-05-13 18:18:51.453615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:35.594 [2024-05-13 18:18:51.456802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.594 [2024-05-13 18:18:51.456809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:36.160 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.418 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.418 [2024-05-13 18:18:52.106799] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 63271 has claimed it. 00:05:36.418 2024/05/13 18:18:52 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:05:36.418 request: 00:05:36.418 { 00:05:36.418 "method": "framework_enable_cpumask_locks", 00:05:36.418 "params": {} 00:05:36.418 } 00:05:36.418 Got JSON-RPC error response 00:05:36.418 GoRPCClient: error on JSON-RPC call 00:05:36.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:36.418 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:36.418 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:36.418 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:36.418 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:36.418 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:36.418 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 63271 /var/tmp/spdk.sock 00:05:36.418 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 63271 ']' 00:05:36.418 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.418 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:36.418 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.418 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:36.418 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:36.676 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:36.676 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:36.676 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 63301 /var/tmp/spdk2.sock 00:05:36.676 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 63301 ']' 00:05:36.676 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:36.676 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:36.676 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:36.676 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:36.676 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.934 ************************************ 00:05:36.934 END TEST locking_overlapped_coremask_via_rpc 00:05:36.934 ************************************ 00:05:36.934 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:36.934 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:05:36.934 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:36.934 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:36.934 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:36.934 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:36.934 00:05:36.934 real 0m2.794s 00:05:36.934 user 0m1.420s 00:05:36.934 sys 0m0.247s 00:05:36.934 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:36.934 18:18:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.934 18:18:52 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:36.934 18:18:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63271 ]] 00:05:36.934 18:18:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63271 00:05:36.934 18:18:52 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 63271 ']' 00:05:36.934 18:18:52 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 63271 00:05:36.934 18:18:52 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:36.934 18:18:52 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:36.934 18:18:52 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63271 00:05:36.934 killing process with pid 63271 00:05:36.934 18:18:52 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:36.934 18:18:52 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:36.934 18:18:52 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63271' 00:05:36.934 18:18:52 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 63271 00:05:36.934 18:18:52 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 63271 00:05:37.500 18:18:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63301 ]] 00:05:37.500 18:18:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63301 00:05:37.500 18:18:53 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 63301 ']' 00:05:37.500 18:18:53 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 63301 00:05:37.500 18:18:53 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:05:37.500 18:18:53 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:37.500 
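check_remaining_locks, traced a few lines up, is the actual pass/fail criterion of this test: after both targets have been exercised, exactly one lock file per originally claimed core (0-2) may remain under /var/tmp. Stripped of the xtrace escaping, the check is essentially:

# Essence of check_remaining_locks as traced above (event/cpu_locks.sh).
locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files present now
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # one per core claimed by pid 63271
[[ ${locks[*]} == "${locks_expected[*]}" ]]         # any extra or missing file fails the test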
18:18:53 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63301 00:05:37.500 killing process with pid 63301 00:05:37.500 18:18:53 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:05:37.500 18:18:53 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:05:37.500 18:18:53 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63301' 00:05:37.500 18:18:53 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 63301 00:05:37.500 18:18:53 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 63301 00:05:38.065 18:18:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:38.065 18:18:53 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:38.065 18:18:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 63271 ]] 00:05:38.065 18:18:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 63271 00:05:38.065 18:18:53 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 63271 ']' 00:05:38.065 18:18:53 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 63271 00:05:38.065 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (63271) - No such process 00:05:38.065 Process with pid 63271 is not found 00:05:38.065 18:18:53 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 63271 is not found' 00:05:38.065 18:18:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 63301 ]] 00:05:38.065 18:18:53 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 63301 00:05:38.065 18:18:53 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 63301 ']' 00:05:38.065 Process with pid 63301 is not found 00:05:38.065 18:18:53 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 63301 00:05:38.065 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (63301) - No such process 00:05:38.065 18:18:53 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 63301 is not found' 00:05:38.065 18:18:53 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:38.065 00:05:38.065 real 0m21.622s 00:05:38.065 user 0m38.008s 00:05:38.065 sys 0m5.708s 00:05:38.065 18:18:53 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.065 ************************************ 00:05:38.065 END TEST cpu_locks 00:05:38.065 ************************************ 00:05:38.065 18:18:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.065 ************************************ 00:05:38.065 END TEST event 00:05:38.065 ************************************ 00:05:38.065 00:05:38.065 real 0m50.447s 00:05:38.065 user 1m37.602s 00:05:38.065 sys 0m9.715s 00:05:38.065 18:18:53 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:38.065 18:18:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.065 18:18:53 -- spdk/autotest.sh@178 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:38.065 18:18:53 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:38.065 18:18:53 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.065 18:18:53 -- common/autotest_common.sh@10 -- # set +x 00:05:38.065 ************************************ 00:05:38.065 START TEST thread 00:05:38.065 ************************************ 00:05:38.065 18:18:53 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:38.323 * Looking for test storage... 
00:05:38.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:38.323 18:18:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:38.323 18:18:54 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:38.323 18:18:54 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:38.323 18:18:54 thread -- common/autotest_common.sh@10 -- # set +x 00:05:38.323 ************************************ 00:05:38.323 START TEST thread_poller_perf 00:05:38.323 ************************************ 00:05:38.323 18:18:54 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:38.323 [2024-05-13 18:18:54.098517] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:38.323 [2024-05-13 18:18:54.098821] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63453 ] 00:05:38.323 [2024-05-13 18:18:54.242841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.581 [2024-05-13 18:18:54.347455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.581 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:39.955 ====================================== 00:05:39.955 busy:2210947834 (cyc) 00:05:39.955 total_run_count: 316000 00:05:39.955 tsc_hz: 2200000000 (cyc) 00:05:39.955 ====================================== 00:05:39.955 poller_cost: 6996 (cyc), 3180 (nsec) 00:05:39.955 00:05:39.955 real 0m1.389s 00:05:39.955 user 0m1.216s 00:05:39.955 sys 0m0.065s 00:05:39.955 18:18:55 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:39.955 18:18:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:39.955 ************************************ 00:05:39.955 END TEST thread_poller_perf 00:05:39.955 ************************************ 00:05:39.955 18:18:55 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:39.955 18:18:55 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:39.955 18:18:55 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:39.955 18:18:55 thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.955 ************************************ 00:05:39.955 START TEST thread_poller_perf 00:05:39.955 ************************************ 00:05:39.955 18:18:55 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:39.955 [2024-05-13 18:18:55.548308] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:39.955 [2024-05-13 18:18:55.548483] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63489 ] 00:05:39.955 [2024-05-13 18:18:55.689456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.955 Running 1000 pollers for 1 seconds with 0 microseconds period. 
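The summary block above is consistent with poller_cost being plain arithmetic on the reported figures: busy cycles divided by total_run_count gives the cost in cycles, and the nanosecond value converts that through the TSC rate. A quick integer-math check in bash reproduces the numbers:

# Reproduce poller_cost for the 1 us-period run reported above.
busy=2210947834        # busy: cycles spent in pollers over the 1 s run
runs=316000            # total_run_count
tsc_hz=2200000000      # 2.2 GHz TSC
echo "poller_cost: $(( busy / runs )) cyc"                        # -> 6996
echo "poller_cost: $(( busy / runs * 1000000000 / tsc_hz )) nsec" # -> 3180
# The 0 us-period run that follows works out the same way:
# 2202017605 / 4204000 -> 523 cyc, i.e. about 237 nsec.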
00:05:39.955 [2024-05-13 18:18:55.792528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.330 ====================================== 00:05:41.330 busy:2202017605 (cyc) 00:05:41.330 total_run_count: 4204000 00:05:41.330 tsc_hz: 2200000000 (cyc) 00:05:41.330 ====================================== 00:05:41.330 poller_cost: 523 (cyc), 237 (nsec) 00:05:41.330 00:05:41.330 real 0m1.383s 00:05:41.330 user 0m1.213s 00:05:41.330 sys 0m0.063s 00:05:41.330 18:18:56 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.330 ************************************ 00:05:41.330 18:18:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:41.330 END TEST thread_poller_perf 00:05:41.330 ************************************ 00:05:41.330 18:18:56 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:41.330 ************************************ 00:05:41.330 END TEST thread 00:05:41.330 ************************************ 00:05:41.330 00:05:41.330 real 0m2.973s 00:05:41.330 user 0m2.516s 00:05:41.330 sys 0m0.233s 00:05:41.330 18:18:56 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:41.330 18:18:56 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.330 18:18:56 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:41.330 18:18:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.330 18:18:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.330 18:18:56 -- common/autotest_common.sh@10 -- # set +x 00:05:41.330 ************************************ 00:05:41.330 START TEST accel 00:05:41.330 ************************************ 00:05:41.330 18:18:56 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:05:41.330 * Looking for test storage... 00:05:41.330 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:41.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.331 18:18:57 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:41.331 18:18:57 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:41.331 18:18:57 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:41.331 18:18:57 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=63563 00:05:41.331 18:18:57 accel -- accel/accel.sh@63 -- # waitforlisten 63563 00:05:41.331 18:18:57 accel -- common/autotest_common.sh@827 -- # '[' -z 63563 ']' 00:05:41.331 18:18:57 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.331 18:18:57 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:41.331 18:18:57 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:41.331 18:18:57 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:41.331 18:18:57 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:41.331 18:18:57 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:41.331 18:18:57 accel -- common/autotest_common.sh@10 -- # set +x 00:05:41.331 18:18:57 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:41.331 18:18:57 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:41.331 18:18:57 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.331 18:18:57 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.331 18:18:57 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:41.331 18:18:57 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:41.331 18:18:57 accel -- accel/accel.sh@41 -- # jq -r . 00:05:41.331 [2024-05-13 18:18:57.154958] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:41.331 [2024-05-13 18:18:57.155526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63563 ] 00:05:41.589 [2024-05-13 18:18:57.288270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.589 [2024-05-13 18:18:57.403808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.524 18:18:58 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:42.524 18:18:58 accel -- common/autotest_common.sh@860 -- # return 0 00:05:42.524 18:18:58 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:42.524 18:18:58 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:42.524 18:18:58 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:42.524 18:18:58 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:42.524 18:18:58 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:42.524 18:18:58 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:42.524 18:18:58 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:42.524 18:18:58 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.524 18:18:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:42.524 18:18:58 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.524 18:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.524 18:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.524 18:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.524 18:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.524 18:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.524 18:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.524 18:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.524 18:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.524 18:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.524 18:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.524 18:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.524 18:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.524 18:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.524 18:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.524 18:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.524 18:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.524 18:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.524 18:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.524 18:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.524 18:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.524 18:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.524 
18:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.524 18:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.524 18:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.524 18:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.524 18:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.524 18:18:58 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # IFS== 00:05:42.524 18:18:58 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:42.524 18:18:58 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:42.524 18:18:58 accel -- accel/accel.sh@75 -- # killprocess 63563 00:05:42.525 18:18:58 accel -- common/autotest_common.sh@946 -- # '[' -z 63563 ']' 00:05:42.525 18:18:58 accel -- common/autotest_common.sh@950 -- # kill -0 63563 00:05:42.525 18:18:58 accel -- common/autotest_common.sh@951 -- # uname 00:05:42.525 18:18:58 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:42.525 18:18:58 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 63563 00:05:42.525 killing process with pid 63563 00:05:42.525 18:18:58 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:42.525 18:18:58 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:42.525 18:18:58 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 63563' 00:05:42.525 18:18:58 accel -- common/autotest_common.sh@965 -- # kill 63563 00:05:42.525 18:18:58 accel -- common/autotest_common.sh@970 -- # wait 63563 00:05:43.090 18:18:58 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:43.090 18:18:58 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:43.090 18:18:58 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:43.090 18:18:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.090 18:18:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.090 18:18:58 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:05:43.090 18:18:58 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:43.090 18:18:58 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:43.090 18:18:58 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.090 18:18:58 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.090 18:18:58 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.090 18:18:58 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.090 18:18:58 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.090 18:18:58 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:43.090 18:18:58 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
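The get_expected_opcs loop traced just before this accel_help run builds the expected_opcs table from a single RPC: accel_get_opc_assignments returns a JSON object mapping each opcode to its module (all "software" here), jq flattens it into key=value lines, and the loop splits each line on '='. Reconstructed without the xtrace noise, the idea is roughly:

# Sketch of the opcode/module harvesting seen above (path assumed relative to
# the SPDK repo root; the harness goes through its rpc_cmd wrapper instead).
scripts/rpc.py accel_get_opc_assignments \
  | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' \
  | while IFS== read -r opc module; do
        echo "opcode $opc handled by $module"
    done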
00:05:43.090 18:18:58 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.090 18:18:58 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:43.090 18:18:58 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:43.090 18:18:58 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:43.091 18:18:58 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.091 18:18:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.091 ************************************ 00:05:43.091 START TEST accel_missing_filename 00:05:43.091 ************************************ 00:05:43.091 18:18:58 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:05:43.091 18:18:58 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:43.091 18:18:58 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:43.091 18:18:58 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:43.091 18:18:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.091 18:18:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:43.091 18:18:58 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.091 18:18:58 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:43.091 18:18:58 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:43.091 18:18:58 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:43.091 18:18:58 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.091 18:18:58 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.091 18:18:58 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.091 18:18:58 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.091 18:18:58 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.091 18:18:58 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:43.091 18:18:58 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:43.091 [2024-05-13 18:18:58.853794] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:43.091 [2024-05-13 18:18:58.855064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63633 ] 00:05:43.091 [2024-05-13 18:18:58.999436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.348 [2024-05-13 18:18:59.116686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.348 [2024-05-13 18:18:59.175631] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:43.348 [2024-05-13 18:18:59.263237] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:43.606 A filename is required. 
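The "A filename is required." message is the expected outcome here: accel_missing_filename deliberately runs the compress workload without -l and wraps the call in the harness's NOT helper, which inverts the exit status so that only a failure counts as a pass. The shape of the test, as a plain-bash sketch:

# Negative-test shape used above (sketch; the harness uses its NOT helper and
# records es=exit status rather than an if-statement).
if /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress; then
    echo "compress without -l unexpectedly succeeded" >&2
    exit 1
fi
echo "got the expected 'A filename is required.' failure"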
00:05:43.606 18:18:59 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:43.606 18:18:59 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:43.606 18:18:59 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:43.606 18:18:59 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:43.606 18:18:59 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:43.606 ************************************ 00:05:43.606 END TEST accel_missing_filename 00:05:43.606 ************************************ 00:05:43.606 18:18:59 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:43.606 00:05:43.606 real 0m0.560s 00:05:43.606 user 0m0.392s 00:05:43.606 sys 0m0.126s 00:05:43.606 18:18:59 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.606 18:18:59 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:43.606 18:18:59 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:43.606 18:18:59 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:43.606 18:18:59 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.606 18:18:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:43.606 ************************************ 00:05:43.606 START TEST accel_compress_verify 00:05:43.606 ************************************ 00:05:43.606 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:43.606 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:43.606 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:43.606 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:43.606 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.606 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:43.606 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:43.606 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:43.606 18:18:59 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:43.606 18:18:59 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:43.606 18:18:59 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:43.606 18:18:59 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:43.606 18:18:59 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.606 18:18:59 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.606 18:18:59 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:43.606 18:18:59 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:43.606 18:18:59 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:05:43.606 [2024-05-13 18:18:59.459982] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:43.606 [2024-05-13 18:18:59.460059] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63657 ] 00:05:43.864 [2024-05-13 18:18:59.590452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.864 [2024-05-13 18:18:59.721074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.864 [2024-05-13 18:18:59.784412] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:44.122 [2024-05-13 18:18:59.870728] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:05:44.122 00:05:44.122 Compression does not support the verify option, aborting. 00:05:44.122 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:44.122 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.122 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:44.122 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:44.122 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:44.122 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.122 00:05:44.122 real 0m0.561s 00:05:44.122 user 0m0.378s 00:05:44.122 sys 0m0.127s 00:05:44.122 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.122 18:18:59 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:44.122 ************************************ 00:05:44.122 END TEST accel_compress_verify 00:05:44.122 ************************************ 00:05:44.122 18:19:00 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:44.122 18:19:00 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:44.122 18:19:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.122 18:19:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.122 ************************************ 00:05:44.122 START TEST accel_wrong_workload 00:05:44.122 ************************************ 00:05:44.122 18:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:05:44.122 18:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:44.122 18:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:44.122 18:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:44.122 18:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.122 18:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:44.122 18:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.122 18:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:05:44.122 18:19:00 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:44.122 18:19:00 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:44.122 18:19:00 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.122 18:19:00 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.122 18:19:00 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.122 18:19:00 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.122 18:19:00 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.122 18:19:00 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:44.122 18:19:00 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:44.380 Unsupported workload type: foobar 00:05:44.380 [2024-05-13 18:19:00.081805] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:44.380 accel_perf options: 00:05:44.380 [-h help message] 00:05:44.380 [-q queue depth per core] 00:05:44.380 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:44.380 [-T number of threads per core 00:05:44.380 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:44.380 [-t time in seconds] 00:05:44.380 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:44.380 [ dif_verify, , dif_generate, dif_generate_copy 00:05:44.380 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:44.380 [-l for compress/decompress workloads, name of uncompressed input file 00:05:44.380 [-S for crc32c workload, use this seed value (default 0) 00:05:44.380 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:44.380 [-f for fill workload, use this BYTE value (default 255) 00:05:44.380 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:44.380 [-y verify result if this switch is on] 00:05:44.380 [-a tasks to allocate per core (default: same value as -q)] 00:05:44.380 Can be used to spread operations across a wider range of memory. 
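The option listing above (printed because -w foobar is rejected) also documents what a valid run looks like; the crc32c case later in this log, for example, boils down to the following invocation (the harness additionally feeds a JSON accel config through -c on a file descriptor):

# Valid accel_perf invocation per the options above, matching the crc32c test
# further down: 1 second, crc32c workload, seed 32, verify the results.
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y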
00:05:44.380 18:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:44.380 18:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.380 18:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.380 18:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.380 00:05:44.380 real 0m0.039s 00:05:44.380 user 0m0.025s 00:05:44.380 sys 0m0.012s 00:05:44.380 18:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.380 18:19:00 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:44.380 ************************************ 00:05:44.380 END TEST accel_wrong_workload 00:05:44.380 ************************************ 00:05:44.380 18:19:00 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:44.380 18:19:00 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:05:44.380 18:19:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.380 18:19:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.380 ************************************ 00:05:44.380 START TEST accel_negative_buffers 00:05:44.380 ************************************ 00:05:44.380 18:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:44.380 18:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:44.380 18:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:44.380 18:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:44.380 18:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.380 18:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:44.380 18:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.380 18:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:44.380 18:19:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:44.381 18:19:00 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:44.381 18:19:00 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.381 18:19:00 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.381 18:19:00 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.381 18:19:00 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.381 18:19:00 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.381 18:19:00 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:44.381 18:19:00 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:44.381 -x option must be non-negative. 
00:05:44.381 [2024-05-13 18:19:00.168542] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:44.381 accel_perf options: 00:05:44.381 [-h help message] 00:05:44.381 [-q queue depth per core] 00:05:44.381 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:44.381 [-T number of threads per core 00:05:44.381 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:44.381 [-t time in seconds] 00:05:44.381 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:44.381 [ dif_verify, , dif_generate, dif_generate_copy 00:05:44.381 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:44.381 [-l for compress/decompress workloads, name of uncompressed input file 00:05:44.381 [-S for crc32c workload, use this seed value (default 0) 00:05:44.381 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:44.381 [-f for fill workload, use this BYTE value (default 255) 00:05:44.381 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:44.381 [-y verify result if this switch is on] 00:05:44.381 [-a tasks to allocate per core (default: same value as -q)] 00:05:44.381 Can be used to spread operations across a wider range of memory. 00:05:44.381 18:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:44.381 18:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.381 18:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.381 18:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.381 00:05:44.381 real 0m0.039s 00:05:44.381 user 0m0.022s 00:05:44.381 sys 0m0.016s 00:05:44.381 ************************************ 00:05:44.381 END TEST accel_negative_buffers 00:05:44.381 ************************************ 00:05:44.381 18:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:44.381 18:19:00 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:44.381 18:19:00 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:44.381 18:19:00 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:44.381 18:19:00 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:44.381 18:19:00 accel -- common/autotest_common.sh@10 -- # set +x 00:05:44.381 ************************************ 00:05:44.381 START TEST accel_crc32c 00:05:44.381 ************************************ 00:05:44.381 18:19:00 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:44.381 18:19:00 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:44.381 18:19:00 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:44.381 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.381 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.381 18:19:00 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:44.381 18:19:00 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:44.381 18:19:00 accel.accel_crc32c -- accel/accel.sh@12 -- # 
build_accel_config 00:05:44.381 18:19:00 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:44.381 18:19:00 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:44.381 18:19:00 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.381 18:19:00 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.381 18:19:00 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:44.381 18:19:00 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:44.381 18:19:00 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:44.381 [2024-05-13 18:19:00.252986] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:44.381 [2024-05-13 18:19:00.253796] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63721 ] 00:05:44.639 [2024-05-13 18:19:00.401435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.639 [2024-05-13 18:19:00.526342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 
00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.897 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:44.898 18:19:00 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.850 18:19:01 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:45.850 18:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.119 18:19:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.119 18:19:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.119 18:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.119 18:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.119 18:19:01 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:46.119 18:19:01 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:46.119 18:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:46.119 18:19:01 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:46.119 18:19:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:46.119 18:19:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:46.119 18:19:01 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.119 00:05:46.119 real 0m1.553s 00:05:46.119 user 0m1.339s 00:05:46.119 sys 0m0.118s 00:05:46.119 18:19:01 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:46.119 ************************************ 00:05:46.119 END TEST accel_crc32c 00:05:46.119 ************************************ 00:05:46.119 18:19:01 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:46.119 18:19:01 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:46.119 18:19:01 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:46.119 18:19:01 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.119 18:19:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:46.119 ************************************ 00:05:46.119 START TEST accel_crc32c_C2 00:05:46.119 ************************************ 00:05:46.119 18:19:01 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:46.119 18:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.119 18:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:46.119 18:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.119 18:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:46.119 18:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.119 18:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:46.119 18:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.119 18:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:46.119 18:19:01 
accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:46.119 18:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.119 18:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.119 18:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:46.119 18:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:46.119 18:19:01 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:46.119 [2024-05-13 18:19:01.849922] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:46.119 [2024-05-13 18:19:01.850009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63756 ] 00:05:46.119 [2024-05-13 18:19:01.987294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.378 [2024-05-13 18:19:02.126244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:46.378 18:19:02 
accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:46.378 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:46.379 18:19:02 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.754 ************************************ 00:05:47.754 END TEST accel_crc32c_C2 00:05:47.754 ************************************ 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:47.754 00:05:47.754 real 0m1.550s 00:05:47.754 user 0m1.329s 00:05:47.754 sys 0m0.128s 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:47.754 18:19:03 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:47.754 18:19:03 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:47.754 18:19:03 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:47.754 18:19:03 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:47.754 18:19:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:47.754 ************************************ 00:05:47.754 START TEST accel_copy 00:05:47.754 ************************************ 00:05:47.754 18:19:03 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:05:47.754 18:19:03 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:47.754 18:19:03 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:47.754 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:47.754 18:19:03 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:47.754 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:47.754 18:19:03 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:47.754 18:19:03 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:47.754 
18:19:03 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:47.754 18:19:03 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:47.754 18:19:03 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:47.754 18:19:03 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:47.754 18:19:03 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:47.754 18:19:03 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:47.754 18:19:03 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:47.754 [2024-05-13 18:19:03.451283] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:47.754 [2024-05-13 18:19:03.451383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63790 ] 00:05:47.754 [2024-05-13 18:19:03.587768] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.011 [2024-05-13 18:19:03.698219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy 
-- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:48.011 18:19:03 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 
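
A side note on the DPDK EAL parameter list and app-start notices printed a little earlier in this run: the -c 0x1 handed to the EAL is a core mask where bit N selects core N, which is consistent with the "Total cores available: 1" notice and the reactor starting on core 0. A quick, hedged illustration of how such a mask relates to a core number (plain bash, not taken from the SPDK scripts):

  core=0
  # bit <core> set -> that core is enabled in the mask
  printf 'mask for core %d: 0x%x\n' "$core" $((1 << core))
  # prints: mask for core 0: 0x1
  # cores 0 and 1 together would be $(( (1 << 0) | (1 << 1) )), i.e. 0x3
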
00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.384 ************************************ 00:05:49.384 END TEST accel_copy 00:05:49.384 ************************************ 00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.384 18:19:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.385 18:19:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.385 18:19:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.385 18:19:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.385 18:19:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.385 18:19:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.385 18:19:04 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:49.385 18:19:04 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:49.385 18:19:04 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:49.385 18:19:04 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:49.385 18:19:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:49.385 18:19:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:49.385 18:19:04 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.385 00:05:49.385 real 0m1.516s 00:05:49.385 user 0m1.306s 00:05:49.385 sys 0m0.113s 00:05:49.385 18:19:04 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:49.385 18:19:04 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:49.385 18:19:04 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.385 18:19:04 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:05:49.385 18:19:04 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:49.385 18:19:04 accel -- common/autotest_common.sh@10 -- # set +x 00:05:49.385 ************************************ 00:05:49.385 START TEST accel_fill 00:05:49.385 ************************************ 00:05:49.385 18:19:04 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.385 18:19:04 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:49.385 18:19:04 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:49.385 18:19:04 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.385 18:19:04 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.385 18:19:04 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:49.385 [2024-05-13 18:19:05.022055] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
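
The command logged just above, accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y, suggests accel.sh hands the JSON produced by build_accel_config to the perf tool over an extra file descriptor. A rough sketch of reproducing that shape by hand follows; the binary path is the one printed in the log, while the empty {} config and feeding fd 62 from a here-string are assumptions, since the trace never shows the JSON actually sent.

  ACCEL_PERF=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  # -t 1 and -w fill match the '1 seconds' and fill values echoed in the trace;
  # the 0x80 fill byte and the 64/64 values appear to line up with -f 128 -q 64 -a 64,
  # and -y appears to correspond to the val=Yes (verify) entry.
  "$ACCEL_PERF" -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 62<<< '{}'
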
00:05:49.385 [2024-05-13 18:19:05.022711] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63825 ] 00:05:49.385 [2024-05-13 18:19:05.163292] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.385 [2024-05-13 18:19:05.268202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.385 18:19:05 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@22 -- # 
accel_module=software 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:49.662 18:19:05 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.595 18:19:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.595 18:19:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.595 18:19:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.595 18:19:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.596 18:19:06 accel.accel_fill 
-- accel/accel.sh@20 -- # val= 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:50.596 18:19:06 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.596 00:05:50.596 real 0m1.523s 00:05:50.596 user 0m1.307s 00:05:50.596 sys 0m0.121s 00:05:50.596 18:19:06 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:50.596 ************************************ 00:05:50.596 END TEST accel_fill 00:05:50.596 ************************************ 00:05:50.596 18:19:06 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:50.854 18:19:06 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:50.854 18:19:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:50.854 18:19:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:50.854 18:19:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.854 ************************************ 00:05:50.854 START TEST accel_copy_crc32c 00:05:50.854 ************************************ 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:50.854 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:50.854 [2024-05-13 18:19:06.594421] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
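
Each block in this log is framed the same way: run_test prints the START TEST banner, times the accel_test invocation (which produces the real/user/sys lines), then prints END TEST, e.g. run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y above. A minimal sketch of that framing, assuming only what the banners and time output show; the real run_test in common/autotest_common.sh also manages xtrace and exit handling:

  run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"          # produces the real/user/sys lines seen in the log
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
  }
  # e.g.: run_test_sketch accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
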
00:05:50.854 [2024-05-13 18:19:06.594558] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63859 ] 00:05:50.854 [2024-05-13 18:19:06.732037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.113 [2024-05-13 18:19:06.849819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.113 18:19:06 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.113 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:51.114 18:19:06 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:52.488 ************************************ 00:05:52.488 END TEST accel_copy_crc32c 00:05:52.488 ************************************ 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.488 00:05:52.488 real 0m1.530s 00:05:52.488 user 0m1.312s 00:05:52.488 sys 0m0.126s 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:52.488 18:19:08 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:52.488 18:19:08 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:52.488 18:19:08 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:52.488 18:19:08 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.488 18:19:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:52.488 ************************************ 00:05:52.488 START TEST accel_copy_crc32c_C2 00:05:52.488 ************************************ 00:05:52.488 18:19:08 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:52.488 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.488 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:52.488 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.488 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:52.488 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.489 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:05:52.489 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.489 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:52.489 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:52.489 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.489 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.489 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:52.489 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:52.489 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:52.489 [2024-05-13 18:19:08.172137] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:05:52.489 [2024-05-13 18:19:08.172235] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63894 ] 00:05:52.489 [2024-05-13 18:19:08.312476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.747 [2024-05-13 18:19:08.435768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
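
The long runs of val= lines above, interleaved with case "$var" in, IFS=: and read -r var val, are accel.sh replaying the perf settings one key:value pair at a time; when a pair names the opcode or the module, the script records it in accel_opc or accel_module (accel.sh@23 and @22 in the trace). A small sketch of that loop under assumptions: the key names "opcode" and "module" and the sample input are illustrative, only the variable names, the IFS=: split and the read/case structure are visible in the log.

  while IFS=: read -r var val; do
    case "$var" in
      opcode) accel_opc=$val ;;      # e.g. copy_crc32c
      module) accel_module=$val ;;   # e.g. software
    esac
  done < <(printf '%s\n' 'opcode:copy_crc32c' 'module:software')
  echo "$accel_opc via $accel_module"   # -> copy_crc32c via software
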
00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:52.747 18:19:08 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.118 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.118 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.118 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:54.119 00:05:54.119 real 0m1.543s 00:05:54.119 user 0m0.013s 00:05:54.119 sys 0m0.006s 00:05:54.119 ************************************ 00:05:54.119 END TEST accel_copy_crc32c_C2 00:05:54.119 ************************************ 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.119 18:19:09 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:54.119 18:19:09 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:05:54.119 18:19:09 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:54.119 18:19:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.119 18:19:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:54.119 ************************************ 00:05:54.119 START TEST accel_dualcast 00:05:54.119 ************************************ 00:05:54.119 18:19:09 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:05:54.119 18:19:09 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:54.119 18:19:09 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:54.119 18:19:09 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.119 18:19:09 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.119 18:19:09 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:54.119 18:19:09 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:54.119 18:19:09 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:54.119 18:19:09 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:54.119 18:19:09 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:54.119 18:19:09 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.119 18:19:09 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.119 18:19:09 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:54.119 18:19:09 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:54.119 18:19:09 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:54.119 [2024-05-13 18:19:09.759044] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
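
build_accel_config runs before every accel_perf launch in this log: it initializes accel_json_cfg=(), checks a few module toggles that are all off in these runs ([[ 0 -gt 0 ]], [[ -n '' ]]), sets local IFS=, and pipes the result through jq -r . A hedged sketch of that flow; the JSON layout and the empty-array fallback are assumptions, since the trace never prints the generated config.

  build_accel_config_sketch() {
    local accel_json_cfg=()            # module snippets would be appended here
    # every toggle was off in the logged runs, so the array stays empty
    if (( ${#accel_json_cfg[@]} > 0 )); then
      local IFS=,
      # hypothetical layout: join the snippets into one accel config document
      echo "{\"accel\":[${accel_json_cfg[*]}]}" | jq -r .
    else
      echo '{}' | jq -r .              # assumed shape for the empty case
    fi
  }
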
00:05:54.119 [2024-05-13 18:19:09.759250] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63934 ] 00:05:54.119 [2024-05-13 18:19:09.893189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.119 [2024-05-13 18:19:10.012737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:54.377 18:19:10 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.788 
18:19:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:55.788 18:19:11 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.788 00:05:55.788 real 0m1.545s 00:05:55.788 user 0m1.329s 00:05:55.788 sys 0m0.116s 00:05:55.788 18:19:11 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.788 18:19:11 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:55.788 ************************************ 00:05:55.788 END TEST accel_dualcast 00:05:55.788 ************************************ 00:05:55.788 18:19:11 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:55.788 18:19:11 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:55.788 18:19:11 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.788 18:19:11 accel -- common/autotest_common.sh@10 -- # set +x 00:05:55.788 ************************************ 00:05:55.788 START TEST accel_compare 00:05:55.788 ************************************ 00:05:55.788 18:19:11 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:55.788 [2024-05-13 18:19:11.365454] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:05:55.788 [2024-05-13 18:19:11.365605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63963 ] 00:05:55.788 [2024-05-13 18:19:11.504332] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.788 [2024-05-13 18:19:11.637469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.788 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.789 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.789 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:55.789 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:55.789 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:55.789 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:55.789 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.086 18:19:11 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:56.086 18:19:11 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:57.019 18:19:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:57.020 18:19:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:57.020 18:19:12 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:57.020 18:19:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:57.020 18:19:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:57.020 18:19:12 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:57.020 18:19:12 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:57.020 18:19:12 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:57.020 18:19:12 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:57.020 18:19:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:57.020 18:19:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:57.020 18:19:12 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.020 00:05:57.020 real 0m1.565s 00:05:57.020 user 0m1.341s 00:05:57.020 sys 0m0.128s 00:05:57.020 18:19:12 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:57.020 ************************************ 00:05:57.020 END TEST accel_compare 00:05:57.020 ************************************ 00:05:57.020 18:19:12 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:57.020 18:19:12 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:57.020 18:19:12 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:57.020 18:19:12 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:57.020 18:19:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:57.020 ************************************ 00:05:57.020 START TEST accel_xor 00:05:57.020 ************************************ 00:05:57.020 18:19:12 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:05:57.020 18:19:12 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:57.020 18:19:12 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:57.020 18:19:12 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.020 18:19:12 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:57.020 18:19:12 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.020 18:19:12 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:57.020 18:19:12 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:57.020 18:19:12 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:57.020 18:19:12 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:57.020 18:19:12 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.020 18:19:12 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.020 18:19:12 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:57.020 18:19:12 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:57.020 18:19:12 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:57.278 [2024-05-13 18:19:12.971961] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:05:57.278 [2024-05-13 18:19:12.972056] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64005 ] 00:05:57.278 [2024-05-13 18:19:13.104503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.538 [2024-05-13 18:19:13.222197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:57.538 18:19:13 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:57.538 18:19:13 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.952 00:05:58.952 real 0m1.630s 00:05:58.952 user 0m1.424s 00:05:58.952 sys 0m0.111s 00:05:58.952 18:19:14 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:58.952 18:19:14 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:58.952 ************************************ 00:05:58.952 END TEST accel_xor 00:05:58.952 ************************************ 00:05:58.952 18:19:14 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:58.952 18:19:14 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:58.952 18:19:14 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:58.952 18:19:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:58.952 ************************************ 00:05:58.952 START TEST accel_xor 00:05:58.952 ************************************ 00:05:58.952 18:19:14 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:58.952 18:19:14 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:58.952 [2024-05-13 18:19:14.657201] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:05:58.952 [2024-05-13 18:19:14.657285] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64034 ] 00:05:58.952 [2024-05-13 18:19:14.791354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.209 [2024-05-13 18:19:14.914019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.209 18:19:14 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:59.209 18:19:15 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.209 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:59.210 18:19:15 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:00.581 18:19:16 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.581 00:06:00.581 real 0m1.647s 00:06:00.581 user 0m1.412s 00:06:00.581 sys 0m0.139s 00:06:00.581 18:19:16 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.581 ************************************ 00:06:00.581 END TEST accel_xor 00:06:00.581 ************************************ 00:06:00.581 18:19:16 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:00.581 18:19:16 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:00.581 18:19:16 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:00.581 18:19:16 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.581 18:19:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:00.581 ************************************ 00:06:00.581 START TEST accel_dif_verify 00:06:00.581 ************************************ 00:06:00.581 18:19:16 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:06:00.581 18:19:16 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:00.581 18:19:16 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:00.581 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.581 18:19:16 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:00.581 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.581 18:19:16 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:00.581 18:19:16 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:00.581 18:19:16 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:00.581 18:19:16 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:00.581 18:19:16 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.581 18:19:16 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.581 18:19:16 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:00.581 18:19:16 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:00.581 18:19:16 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:00.581 [2024-05-13 18:19:16.349134] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:06:00.582 [2024-05-13 18:19:16.349289] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64074 ] 00:06:00.582 [2024-05-13 18:19:16.489996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.840 [2024-05-13 18:19:16.642333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:06:00.840 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:00.841 18:19:16 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@21 -- 
# case "$var" in 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.214 18:19:17 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.214 18:19:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.214 18:19:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.214 18:19:18 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:02.214 18:19:18 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:02.214 18:19:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:02.214 18:19:18 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:02.214 18:19:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:02.214 18:19:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:02.214 18:19:18 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:02.214 00:06:02.214 real 0m1.677s 00:06:02.214 user 0m1.431s 00:06:02.214 sys 0m0.152s 00:06:02.214 18:19:18 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:02.214 ************************************ 00:06:02.214 END TEST accel_dif_verify 00:06:02.214 ************************************ 00:06:02.214 18:19:18 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:02.214 18:19:18 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:02.214 18:19:18 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:02.214 18:19:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:02.214 18:19:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.214 ************************************ 00:06:02.214 START TEST accel_dif_generate 00:06:02.214 ************************************ 00:06:02.214 18:19:18 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:06:02.214 18:19:18 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:02.214 18:19:18 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:02.214 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.214 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.214 18:19:18 accel.accel_dif_generate -- accel/accel.sh@15 -- # 
accel_perf -t 1 -w dif_generate 00:06:02.214 18:19:18 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:02.214 18:19:18 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:02.214 18:19:18 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.214 18:19:18 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.214 18:19:18 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.214 18:19:18 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.214 18:19:18 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.214 18:19:18 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:02.214 18:19:18 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:02.215 [2024-05-13 18:19:18.073536] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:02.215 [2024-05-13 18:19:18.073665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64114 ] 00:06:02.472 [2024-05-13 18:19:18.213752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.472 [2024-05-13 18:19:18.376282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.730 
18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.730 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.731 18:19:18 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:02.731 18:19:18 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.104 18:19:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:04.105 18:19:19 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.105 00:06:04.105 real 0m1.699s 00:06:04.105 user 0m1.445s 00:06:04.105 sys 0m0.159s 00:06:04.105 18:19:19 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:04.105 
************************************ 00:06:04.105 END TEST accel_dif_generate 00:06:04.105 ************************************ 00:06:04.105 18:19:19 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:04.105 18:19:19 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:04.105 18:19:19 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:06:04.105 18:19:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:04.105 18:19:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.105 ************************************ 00:06:04.105 START TEST accel_dif_generate_copy 00:06:04.105 ************************************ 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:04.105 18:19:19 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:04.105 [2024-05-13 18:19:19.816993] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:06:04.105 [2024-05-13 18:19:19.817087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64143 ] 00:06:04.105 [2024-05-13 18:19:19.951870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.362 [2024-05-13 18:19:20.066736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 
-- # val= 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.362 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:04.363 18:19:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.736 00:06:05.736 real 0m1.621s 00:06:05.736 user 0m1.384s 00:06:05.736 sys 0m0.139s 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:05.736 ************************************ 00:06:05.736 18:19:21 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:05.736 END TEST accel_dif_generate_copy 00:06:05.736 ************************************ 00:06:05.736 18:19:21 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:05.736 18:19:21 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:05.736 18:19:21 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:06:05.736 18:19:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:05.736 18:19:21 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.736 ************************************ 00:06:05.736 START TEST accel_comp 00:06:05.736 ************************************ 00:06:05.736 18:19:21 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:05.736 18:19:21 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:05.736 18:19:21 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:05.736 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 
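The accel_dif_generate and accel_dif_generate_copy cases above drive the same accel_perf binary and differ only in the -w workload they pass. A minimal sketch of equivalent manual runs, assuming the workspace layout shown in the trace (/home/vagrant/spdk_repo/spdk) and omitting the harness's -c /dev/fd/62 JSON config so the default software accel module is used:
# DIF generate only, 1 second, default 4096-byte operations
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate
# DIF generate plus copy in a single operation
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy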
00:06:05.736 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.736 18:19:21 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:05.736 18:19:21 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:05.736 18:19:21 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:05.736 18:19:21 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.736 18:19:21 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.736 18:19:21 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.736 18:19:21 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.736 18:19:21 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.736 18:19:21 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:05.736 18:19:21 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:05.736 [2024-05-13 18:19:21.484942] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:05.736 [2024-05-13 18:19:21.485077] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64183 ] 00:06:05.736 [2024-05-13 18:19:21.625972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.995 [2024-05-13 18:19:21.792426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # 
val=compress 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:05.995 18:19:21 
accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:05.995 18:19:21 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:07.369 18:19:23 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.369 00:06:07.369 real 0m1.712s 00:06:07.369 user 0m1.452s 00:06:07.369 sys 0m0.164s 00:06:07.369 18:19:23 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.369 18:19:23 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:07.369 ************************************ 00:06:07.369 END TEST accel_comp 00:06:07.369 ************************************ 00:06:07.369 18:19:23 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:07.369 18:19:23 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:06:07.369 18:19:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.369 18:19:23 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.369 ************************************ 00:06:07.369 START TEST accel_decomp 00:06:07.369 ************************************ 00:06:07.369 18:19:23 accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:07.369 18:19:23 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:07.369 
18:19:23 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:07.369 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.369 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.369 18:19:23 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:07.369 18:19:23 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:07.369 18:19:23 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:07.369 18:19:23 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.369 18:19:23 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.369 18:19:23 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.369 18:19:23 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.369 18:19:23 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.369 18:19:23 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:07.369 18:19:23 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:07.369 [2024-05-13 18:19:23.235742] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:07.369 [2024-05-13 18:19:23.235816] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64218 ] 00:06:07.627 [2024-05-13 18:19:23.365911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.627 [2024-05-13 18:19:23.522204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 
00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:07.884 18:19:23 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:09.255 18:19:24 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.255 00:06:09.255 real 0m1.679s 00:06:09.255 user 0m1.427s 00:06:09.255 sys 0m0.155s 00:06:09.255 18:19:24 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.255 18:19:24 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:09.255 ************************************ 00:06:09.255 END TEST accel_decomp 00:06:09.255 ************************************ 00:06:09.255 18:19:24 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:09.255 18:19:24 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:09.255 18:19:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.255 18:19:24 accel -- common/autotest_common.sh@10 -- # set +x 
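The accel_comp and accel_decomp cases feed accel_perf a real input file through -l (the bundled test/accel/bib corpus) rather than synthetic buffers, and the decompress run adds -y, which is assumed here to enable result verification as in the harness invocation traced above. A minimal sketch of the same round trip under those assumptions:
# Compress the bundled bib test file for 1 second
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
# Decompress the same input with verification (-y)
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y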
00:06:09.255 ************************************ 00:06:09.255 START TEST accel_decmop_full 00:06:09.255 ************************************ 00:06:09.255 18:19:24 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:09.255 18:19:24 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:06:09.255 18:19:24 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:06:09.256 18:19:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.256 18:19:24 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.256 18:19:24 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:09.256 18:19:24 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:06:09.256 18:19:24 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:06:09.256 18:19:24 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:09.256 18:19:24 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:09.256 18:19:24 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.256 18:19:24 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.256 18:19:24 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:09.256 18:19:24 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:06:09.256 18:19:24 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:06:09.256 [2024-05-13 18:19:24.967532] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:06:09.256 [2024-05-13 18:19:24.967647] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64252 ] 00:06:09.256 [2024-05-13 18:19:25.107519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.513 [2024-05-13 18:19:25.258931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.513 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.513 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.513 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.513 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.513 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.513 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.513 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.513 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.513 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.513 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.513 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:09.514 18:19:25 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:10.888 18:19:26 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:10.888 18:19:26 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.888 00:06:10.888 real 0m1.708s 00:06:10.888 user 0m0.015s 00:06:10.888 sys 0m0.005s 00:06:10.888 18:19:26 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:10.888 18:19:26 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:06:10.888 ************************************ 00:06:10.888 END TEST accel_decmop_full 00:06:10.888 ************************************ 00:06:10.888 18:19:26 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:10.888 18:19:26 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:10.888 18:19:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.888 18:19:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.888 ************************************ 00:06:10.888 START TEST accel_decomp_mcore 00:06:10.888 ************************************ 00:06:10.888 18:19:26 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:10.888 18:19:26 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:10.888 18:19:26 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:10.888 18:19:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:10.889 18:19:26 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:10.889 18:19:26 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
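The accel_decmop_full case above repeats the decompress run with -o 0 appended, and in its traced config the operation size changes from the '4096 bytes' used elsewhere to '111250 bytes', the full bib input. The assumption here is therefore that -o sets the per-operation buffer size, with 0 meaning the whole file; a minimal sketch under that assumption:
# Decompress the entire bib file as one full-sized operation (-o 0)
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0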
00:06:10.889 18:19:26 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:10.889 18:19:26 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:06:10.889 18:19:26 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.889 18:19:26 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.889 18:19:26 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.889 18:19:26 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.889 18:19:26 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.889 18:19:26 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:10.889 18:19:26 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:10.889 [2024-05-13 18:19:26.723056] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:10.889 [2024-05-13 18:19:26.723148] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64292 ] 00:06:11.147 [2024-05-13 18:19:26.860998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:11.147 [2024-05-13 18:19:27.021604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.147 [2024-05-13 18:19:27.021719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.147 [2024-05-13 18:19:27.021809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.147 [2024-05-13 18:19:27.021813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.406 18:19:27 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.406 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.407 18:19:27 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:11.407 18:19:27 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.780 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.781 00:06:12.781 real 0m1.705s 00:06:12.781 user 0m5.033s 00:06:12.781 sys 0m0.165s 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.781 18:19:28 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:12.781 ************************************ 00:06:12.781 END TEST accel_decomp_mcore 00:06:12.781 ************************************ 00:06:12.781 18:19:28 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:12.781 18:19:28 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:12.781 18:19:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.781 18:19:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.781 ************************************ 00:06:12.781 START TEST accel_decomp_full_mcore 00:06:12.781 ************************************ 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:12.781 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
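Both _mcore variants add -m 0xf, and the traced start-up confirms the effect: 'Total cores available: 4', reactors started on cores 0 through 3, and a summary of roughly 5 seconds of user time inside about 1.7 seconds of wall time for accel_decomp_mcore. A minimal sketch of the two multi-core invocations, assuming -m takes the usual SPDK core-mask value:
# Decompress on four cores (core mask 0xf) with default 4096-byte operations
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf
# Same, but with full-file operations (-o 0), matching accel_decomp_full_mcore
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf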
00:06:12.781 [2024-05-13 18:19:28.475210] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:12.781 [2024-05-13 18:19:28.475335] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64330 ] 00:06:12.781 [2024-05-13 18:19:28.618245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:13.039 [2024-05-13 18:19:28.777013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.039 [2024-05-13 18:19:28.777121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.039 [2024-05-13 18:19:28.777246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.039 [2024-05-13 18:19:28.777256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:13.039 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.040 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.040 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.040 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.040 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.040 18:19:28 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.040 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:13.040 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:13.040 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:13.040 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:13.040 18:19:28 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.413 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.414 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.414 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.414 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.414 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.414 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.414 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:14.414 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:14.414 18:19:30 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:14.414 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:14.414 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.414 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:14.414 18:19:30 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.414 00:06:14.414 real 0m1.708s 00:06:14.414 user 0m5.080s 00:06:14.414 sys 0m0.173s 00:06:14.414 18:19:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:14.414 18:19:30 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:14.414 ************************************ 00:06:14.414 END TEST accel_decomp_full_mcore 00:06:14.414 ************************************ 00:06:14.414 18:19:30 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:14.414 18:19:30 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:06:14.414 18:19:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:14.414 18:19:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.414 ************************************ 00:06:14.414 START TEST accel_decomp_mthread 00:06:14.414 ************************************ 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:14.414 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:14.414 [2024-05-13 18:19:30.230559] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
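The escaped-looking [[ software == \s\o\f\t\w\a\r\e ]] checks above are xtrace's rendering of a quoted right-hand side of == inside [[ ]]; each character is escaped to show the match is literal rather than a glob. The assertion itself is a plain string comparison, roughly:

    # What the accel.sh@27 checks above amount to (sketch):
    accel_module=software
    accel_opc=decompress
    [[ -n $accel_module && -n $accel_opc && $accel_module == "software" ]] \
        && echo "decompress completed on the software module"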
00:06:14.414 [2024-05-13 18:19:30.230682] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64373 ] 00:06:14.681 [2024-05-13 18:19:30.361225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.681 [2024-05-13 18:19:30.509802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:14.681 18:19:30 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:16.055 00:06:16.055 real 0m1.681s 00:06:16.055 user 0m1.444s 00:06:16.055 sys 0m0.146s 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:16.055 18:19:31 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:16.055 ************************************ 00:06:16.055 END TEST accel_decomp_mthread 00:06:16.055 ************************************ 00:06:16.055 18:19:31 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:16.055 18:19:31 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:06:16.055 18:19:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:16.055 18:19:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:16.055 ************************************ 00:06:16.055 START TEST accel_decomp_full_mthread 00:06:16.055 ************************************ 00:06:16.055 18:19:31 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:16.055 18:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:16.055 18:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:16.055 18:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.055 18:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.055 18:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:16.055 18:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:16.055 18:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:06:16.055 18:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:16.055 18:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:16.055 18:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.055 18:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.055 18:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:16.055 18:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:16.055 18:19:31 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:16.055 [2024-05-13 18:19:31.958760] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
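The -c /dev/fd/62 argument that keeps appearing in the accel_perf command lines is a process-substitution config: build_accel_config assembles a JSON accel configuration in the accel_json_cfg array (empty in this run) and pipes it through jq -r . into an anonymous fd. The sketch below shows the general idea only; the JSON shape and the app name are assumptions, not the literal accel.sh implementation:

    # Illustrative only: hand a generated JSON config to an SPDK app without a temp file.
    accel_json_cfg=()                  # stays empty here, matching the [[ 0 -gt 0 ]] checks above
    gen_accel_cfg() {
        local IFS=,                    # join array entries with commas, as in the traced 'local IFS=,'
        echo "{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}" | jq -r .
    }
    some_spdk_app -c <(gen_accel_cfg)  # hypothetical app; the <(...) is what surfaces as /dev/fd/62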
00:06:16.055 [2024-05-13 18:19:31.958839] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64407 ] 00:06:16.313 [2024-05-13 18:19:32.091730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.313 [2024-05-13 18:19:32.240622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.573 18:19:32 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.573 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:16.574 18:19:32 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.949 00:06:17.949 real 0m1.716s 00:06:17.949 user 0m1.471s 00:06:17.949 sys 0m0.148s 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.949 18:19:33 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:17.949 ************************************ 00:06:17.949 END TEST accel_decomp_full_mthread 00:06:17.949 ************************************ 00:06:17.949 18:19:33 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:06:17.949 18:19:33 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:17.949 18:19:33 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:17.949 18:19:33 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:17.949 18:19:33 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.949 18:19:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.949 18:19:33 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.949 18:19:33 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.949 18:19:33 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.949 18:19:33 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.949 18:19:33 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.949 18:19:33 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:17.949 18:19:33 accel -- accel/accel.sh@41 -- # jq -r . 00:06:17.949 ************************************ 00:06:17.949 START TEST accel_dif_functional_tests 00:06:17.949 ************************************ 00:06:17.949 18:19:33 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:17.949 [2024-05-13 18:19:33.773782] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:17.949 [2024-05-13 18:19:33.773881] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64443 ] 00:06:18.211 [2024-05-13 18:19:33.913212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.211 [2024-05-13 18:19:34.071847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.211 [2024-05-13 18:19:34.072017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:18.211 [2024-05-13 18:19:34.072021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.469 00:06:18.469 00:06:18.469 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.469 http://cunit.sourceforge.net/ 00:06:18.469 00:06:18.469 00:06:18.469 Suite: accel_dif 00:06:18.469 Test: verify: DIF generated, GUARD check ...passed 00:06:18.469 Test: verify: DIF generated, APPTAG check ...passed 00:06:18.469 Test: verify: DIF generated, REFTAG check ...passed 00:06:18.469 Test: verify: DIF not generated, GUARD check ...[2024-05-13 18:19:34.202608] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:18.469 passed 00:06:18.469 Test: verify: DIF not generated, APPTAG check ...[2024-05-13 18:19:34.202922] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:18.469 [2024-05-13 18:19:34.202971] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:18.469 passed 00:06:18.469 Test: verify: DIF not generated, REFTAG check ...[2024-05-13 18:19:34.203006] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:18.469 passed 00:06:18.469 Test: verify: APPTAG correct, APPTAG check ...[2024-05-13 18:19:34.203033] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:18.469 [2024-05-13 18:19:34.203063] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, 
Actual=5a5a5a5a 00:06:18.469 passed 00:06:18.469 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-13 18:19:34.203329] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:18.469 passed 00:06:18.469 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:18.469 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:18.469 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:18.469 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-13 18:19:34.203558] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:18.469 passed 00:06:18.469 Test: generate copy: DIF generated, GUARD check ...passed 00:06:18.469 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:18.469 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:18.469 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:18.469 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:18.469 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:18.469 Test: generate copy: iovecs-len validate ...[2024-05-13 18:19:34.203916] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:06:18.469 passed 00:06:18.469 Test: generate copy: buffer alignment validate ...passed 00:06:18.469 00:06:18.469 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.469 suites 1 1 n/a 0 0 00:06:18.469 tests 20 20 20 0 0 00:06:18.469 asserts 204 204 204 0 n/a 00:06:18.469 00:06:18.469 Elapsed time = 0.004 seconds 00:06:18.727 00:06:18.727 real 0m0.841s 00:06:18.727 user 0m1.171s 00:06:18.727 sys 0m0.221s 00:06:18.727 18:19:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.727 18:19:34 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:18.727 ************************************ 00:06:18.727 END TEST accel_dif_functional_tests 00:06:18.727 ************************************ 00:06:18.727 00:06:18.727 real 0m37.598s 00:06:18.727 user 0m39.413s 00:06:18.727 sys 0m4.521s 00:06:18.727 18:19:34 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:18.727 18:19:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.727 ************************************ 00:06:18.727 END TEST accel 00:06:18.727 ************************************ 00:06:18.727 18:19:34 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:18.727 18:19:34 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:18.727 18:19:34 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:18.727 18:19:34 -- common/autotest_common.sh@10 -- # set +x 00:06:18.727 ************************************ 00:06:18.727 START TEST accel_rpc 00:06:18.727 ************************************ 00:06:18.727 18:19:34 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:06:18.985 * Looking for test storage... 
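The accel_rpc suite that begins here follows the standard SPDK RPC-test shape visible in the following trace: start spdk_tgt paused with --wait-for-rpc, wait for it to listen on /var/tmp/spdk.sock, drive it over rpc.py, then finish initialization and check the result. Condensed into the commands this log actually exercises:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &   # start paused: RPC server only
    spdk_tgt_pid=$!
    # (waitforlisten in the harness polls /var/tmp/spdk.sock until the target answers)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
    $rpc framework_start_init                     # let subsystem initialization proceed
    $rpc accel_get_opc_assignments | jq -r .copy  # expected output: software
    kill "$spdk_tgt_pid"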
00:06:18.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:18.985 18:19:34 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:18.985 18:19:34 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64513 00:06:18.985 18:19:34 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:18.985 18:19:34 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 64513 00:06:18.985 18:19:34 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 64513 ']' 00:06:18.985 18:19:34 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.985 18:19:34 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:18.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.986 18:19:34 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.986 18:19:34 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:18.986 18:19:34 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.986 [2024-05-13 18:19:34.797649] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:18.986 [2024-05-13 18:19:34.797742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64513 ] 00:06:19.243 [2024-05-13 18:19:34.931242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.243 [2024-05-13 18:19:35.042820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.176 18:19:35 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:20.176 18:19:35 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:20.176 18:19:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:20.176 18:19:35 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:20.176 18:19:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:20.176 18:19:35 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:20.176 18:19:35 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:20.176 18:19:35 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:20.176 18:19:35 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:20.176 18:19:35 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.176 ************************************ 00:06:20.176 START TEST accel_assign_opcode 00:06:20.176 ************************************ 00:06:20.176 18:19:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:06:20.176 18:19:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:20.176 18:19:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.176 18:19:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:20.176 [2024-05-13 18:19:35.839348] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:20.176 18:19:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.176 18:19:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # 
rpc_cmd accel_assign_opc -o copy -m software 00:06:20.176 18:19:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.176 18:19:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:20.176 [2024-05-13 18:19:35.847323] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:20.176 18:19:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.176 18:19:35 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:20.176 18:19:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.176 18:19:35 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:20.176 18:19:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.176 18:19:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:20.177 18:19:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:20.177 18:19:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.177 18:19:36 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:20.177 18:19:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:20.177 18:19:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.435 software 00:06:20.435 00:06:20.435 real 0m0.323s 00:06:20.435 user 0m0.079s 00:06:20.435 sys 0m0.009s 00:06:20.435 18:19:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:20.435 18:19:36 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:20.435 ************************************ 00:06:20.435 END TEST accel_assign_opcode 00:06:20.435 ************************************ 00:06:20.435 18:19:36 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 64513 00:06:20.435 18:19:36 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 64513 ']' 00:06:20.435 18:19:36 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 64513 00:06:20.435 18:19:36 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:06:20.435 18:19:36 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.435 18:19:36 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64513 00:06:20.435 18:19:36 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:20.435 18:19:36 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:20.435 killing process with pid 64513 00:06:20.435 18:19:36 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64513' 00:06:20.435 18:19:36 accel_rpc -- common/autotest_common.sh@965 -- # kill 64513 00:06:20.435 18:19:36 accel_rpc -- common/autotest_common.sh@970 -- # wait 64513 00:06:21.000 00:06:21.000 real 0m1.995s 00:06:21.000 user 0m2.144s 00:06:21.000 sys 0m0.470s 00:06:21.000 18:19:36 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:21.000 ************************************ 00:06:21.000 END TEST accel_rpc 00:06:21.000 ************************************ 00:06:21.001 18:19:36 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.001 18:19:36 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:21.001 18:19:36 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:21.001 18:19:36 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:21.001 18:19:36 -- common/autotest_common.sh@10 -- # set +x 00:06:21.001 ************************************ 00:06:21.001 START TEST app_cmdline 00:06:21.001 ************************************ 00:06:21.001 18:19:36 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:21.001 * Looking for test storage... 00:06:21.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:21.001 18:19:36 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:21.001 18:19:36 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=64624 00:06:21.001 18:19:36 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 64624 00:06:21.001 18:19:36 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 64624 ']' 00:06:21.001 18:19:36 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.001 18:19:36 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:21.001 18:19:36 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:21.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.001 18:19:36 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.001 18:19:36 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:21.001 18:19:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:21.001 [2024-05-13 18:19:36.834828] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
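For the cmdline suite, spdk_tgt is started with an RPC allow-list, so only spdk_get_version and rpc_get_methods are reachable and everything else is rejected. The same checks done by hand, using the commands this trace exercises:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
        --rpcs-allowed spdk_get_version,rpc_get_methods &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version | jq -r .version       # "SPDK v24.05-pre git sha1 b084cba07" on this build
    $rpc rpc_get_methods | jq -r '.[]' | sort    # exactly the two allowed methods
    $rpc env_dpdk_get_mem_stats                  # rejected: Code=-32601, Method not found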
00:06:21.001 [2024-05-13 18:19:36.834933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64624 ] 00:06:21.259 [2024-05-13 18:19:36.969423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.259 [2024-05-13 18:19:37.098343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.194 18:19:37 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:22.194 18:19:37 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:06:22.194 18:19:37 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:22.450 { 00:06:22.450 "fields": { 00:06:22.450 "commit": "b084cba07", 00:06:22.450 "major": 24, 00:06:22.450 "minor": 5, 00:06:22.450 "patch": 0, 00:06:22.450 "suffix": "-pre" 00:06:22.450 }, 00:06:22.451 "version": "SPDK v24.05-pre git sha1 b084cba07" 00:06:22.451 } 00:06:22.451 18:19:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:22.451 18:19:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:22.451 18:19:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:22.451 18:19:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:22.451 18:19:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:22.451 18:19:38 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.451 18:19:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.451 18:19:38 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:22.451 18:19:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:22.451 18:19:38 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.451 18:19:38 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:22.451 18:19:38 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:22.451 18:19:38 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.451 18:19:38 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:22.451 18:19:38 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.451 18:19:38 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:22.451 18:19:38 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.451 18:19:38 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:22.451 18:19:38 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.451 18:19:38 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:22.451 18:19:38 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.451 18:19:38 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:22.451 18:19:38 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:22.451 18:19:38 app_cmdline -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.708 2024/05/13 18:19:38 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:06:22.708 request: 00:06:22.708 { 00:06:22.708 "method": "env_dpdk_get_mem_stats", 00:06:22.708 "params": {} 00:06:22.708 } 00:06:22.708 Got JSON-RPC error response 00:06:22.708 GoRPCClient: error on JSON-RPC call 00:06:22.708 18:19:38 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:22.708 18:19:38 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.708 18:19:38 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.708 18:19:38 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.708 18:19:38 app_cmdline -- app/cmdline.sh@1 -- # killprocess 64624 00:06:22.708 18:19:38 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 64624 ']' 00:06:22.708 18:19:38 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 64624 00:06:22.708 18:19:38 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:06:22.708 18:19:38 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:22.708 18:19:38 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64624 00:06:22.708 18:19:38 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:22.708 killing process with pid 64624 00:06:22.708 18:19:38 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:22.708 18:19:38 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64624' 00:06:22.708 18:19:38 app_cmdline -- common/autotest_common.sh@965 -- # kill 64624 00:06:22.708 18:19:38 app_cmdline -- common/autotest_common.sh@970 -- # wait 64624 00:06:23.273 00:06:23.273 real 0m2.308s 00:06:23.273 user 0m2.975s 00:06:23.273 sys 0m0.499s 00:06:23.273 18:19:39 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.273 18:19:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:23.273 ************************************ 00:06:23.273 END TEST app_cmdline 00:06:23.273 ************************************ 00:06:23.273 18:19:39 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:23.273 18:19:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:23.273 18:19:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.273 18:19:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.273 ************************************ 00:06:23.273 START TEST version 00:06:23.273 ************************************ 00:06:23.273 18:19:39 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:23.273 * Looking for test storage... 
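The request/response pair above is ordinary JSON-RPC 2.0 over the target's UNIX socket. A rough equivalent of the denied call issued raw, assuming a tool such as socat is available (this log itself only uses rpc.py):

    # Sketch only: speak JSON-RPC to /var/tmp/spdk.sock directly instead of via rpc.py.
    printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"env_dpdk_get_mem_stats","params":{}}' \
        | socat - UNIX-CONNECT:/var/tmp/spdk.sock
    # Expected shape of the reply, matching the -32601 error in the trace above:
    # {"jsonrpc":"2.0","id":1,"error":{"code":-32601,"message":"Method not found"}}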
00:06:23.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:23.273 18:19:39 version -- app/version.sh@17 -- # get_header_version major 00:06:23.273 18:19:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:23.273 18:19:39 version -- app/version.sh@14 -- # cut -f2 00:06:23.273 18:19:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.273 18:19:39 version -- app/version.sh@17 -- # major=24 00:06:23.273 18:19:39 version -- app/version.sh@18 -- # get_header_version minor 00:06:23.273 18:19:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:23.273 18:19:39 version -- app/version.sh@14 -- # cut -f2 00:06:23.273 18:19:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.273 18:19:39 version -- app/version.sh@18 -- # minor=5 00:06:23.273 18:19:39 version -- app/version.sh@19 -- # get_header_version patch 00:06:23.273 18:19:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:23.273 18:19:39 version -- app/version.sh@14 -- # cut -f2 00:06:23.273 18:19:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.273 18:19:39 version -- app/version.sh@19 -- # patch=0 00:06:23.273 18:19:39 version -- app/version.sh@20 -- # get_header_version suffix 00:06:23.273 18:19:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:23.273 18:19:39 version -- app/version.sh@14 -- # cut -f2 00:06:23.273 18:19:39 version -- app/version.sh@14 -- # tr -d '"' 00:06:23.273 18:19:39 version -- app/version.sh@20 -- # suffix=-pre 00:06:23.273 18:19:39 version -- app/version.sh@22 -- # version=24.5 00:06:23.273 18:19:39 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:23.273 18:19:39 version -- app/version.sh@28 -- # version=24.5rc0 00:06:23.273 18:19:39 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:23.273 18:19:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:23.531 18:19:39 version -- app/version.sh@30 -- # py_version=24.5rc0 00:06:23.531 18:19:39 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:06:23.531 00:06:23.531 real 0m0.172s 00:06:23.531 user 0m0.118s 00:06:23.531 sys 0m0.089s 00:06:23.531 18:19:39 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:23.531 18:19:39 version -- common/autotest_common.sh@10 -- # set +x 00:06:23.531 ************************************ 00:06:23.531 END TEST version 00:06:23.531 ************************************ 00:06:23.531 18:19:39 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:06:23.531 18:19:39 -- spdk/autotest.sh@194 -- # uname -s 00:06:23.531 18:19:39 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:23.531 18:19:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:23.531 18:19:39 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:23.531 18:19:39 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:23.531 18:19:39 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:06:23.531 18:19:39 -- spdk/autotest.sh@258 -- # timing_exit lib 00:06:23.531 18:19:39 -- common/autotest_common.sh@726 -- # xtrace_disable 
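The version test above is plain text processing on include/spdk/version.h: each SPDK_VERSION_* macro is pulled out with grep -E, its value taken with cut -f2, and surrounding quotes stripped with tr -d '"', then the result is cross-checked against the Python package's spdk.__version__ (24.5rc0 in this run). Roughly, against the same checkout layout the trace uses:

  hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h

  get_field() {
      # cut -f2 relies on the macro name and value being tab-separated,
      # which is how version.h is laid out in this tree.
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'
  }

  major=$(get_field MAJOR)    # 24 in this run
  minor=$(get_field MINOR)    # 5
  patch=$(get_field PATCH)    # 0
  suffix=$(get_field SUFFIX)  # -pre
  echo "header says ${major}.${minor}.${patch}${suffix}"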
00:06:23.531 18:19:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.531 18:19:39 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:23.531 18:19:39 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:06:23.531 18:19:39 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:06:23.531 18:19:39 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:06:23.531 18:19:39 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:06:23.531 18:19:39 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:06:23.531 18:19:39 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.531 18:19:39 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:23.531 18:19:39 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.531 18:19:39 -- common/autotest_common.sh@10 -- # set +x 00:06:23.531 ************************************ 00:06:23.531 START TEST nvmf_tcp 00:06:23.531 ************************************ 00:06:23.531 18:19:39 nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:23.531 * Looking for test storage... 00:06:23.531 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:23.531 18:19:39 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.531 18:19:39 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.531 18:19:39 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.531 18:19:39 nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.531 18:19:39 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.531 18:19:39 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.531 18:19:39 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:06:23.531 18:19:39 nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:23.531 18:19:39 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:23.531 18:19:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:23.531 18:19:39 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:23.531 18:19:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:23.531 18:19:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:23.531 18:19:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.531 ************************************ 00:06:23.531 START TEST nvmf_example 00:06:23.531 ************************************ 00:06:23.531 18:19:39 
nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:23.790 * Looking for test storage... 00:06:23.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:23.790 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:23.791 Cannot find device "nvmf_init_br" 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@154 -- # true 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:23.791 Cannot find device "nvmf_tgt_br" 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@155 -- # true 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:23.791 Cannot find device "nvmf_tgt_br2" 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@156 -- # true 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:23.791 Cannot find device "nvmf_init_br" 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@157 -- # true 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:23.791 Cannot find device "nvmf_tgt_br" 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@158 -- # true 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:23.791 Cannot find device "nvmf_tgt_br2" 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@159 -- # true 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:23.791 Cannot find device "nvmf_br" 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@160 -- # true 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:23.791 Cannot find device "nvmf_init_if" 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@161 -- # true 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:23.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@162 -- # true 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:23.791 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@163 -- # true 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:23.791 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 
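At this point nvmf_veth_init has built the test network: one namespace (nvmf_tgt_ns_spdk) for the target, three veth pairs, and the 10.0.0.0/24 addressing; the nvmf_br bridge, the iptables rules and the ping checks follow below. Condensed into the bare ip commands the trace just ran (root required):

  ip netns add nvmf_tgt_ns_spdk

  # Initiator-side and target-side veth pairs; the *_br ends stay in the root
  # namespace so the next step can enslave them to the nvmf_br bridge.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # The initiator gets 10.0.0.1; the namespaced target interfaces get 10.0.0.2 and 10.0.0.3.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # Bring everything up, including loopback inside the namespace.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up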
00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:24.049 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:24.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:24.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:06:24.050 00:06:24.050 --- 10.0.0.2 ping statistics --- 00:06:24.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.050 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:24.050 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:24.050 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:06:24.050 00:06:24.050 --- 10.0.0.3 ping statistics --- 00:06:24.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.050 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:24.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:24.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:06:24.050 00:06:24.050 --- 10.0.0.1 ping statistics --- 00:06:24.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:24.050 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@433 -- # return 0 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=64982 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 64982 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 64982 ']' 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:24.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
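With the nvmf example app now started inside the namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF) and listening on /var/tmp/spdk.sock, the bring-up and load traced below reduce to a short RPC sequence plus one spdk_nvme_perf run. A condensed sketch, reusing the exact arguments recorded in the trace:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$rpc" nvmf_create_transport -t tcp -o -u 8192    # the transport options the test computed
  "$rpc" bdev_malloc_create 64 512                  # 64 MB ramdisk, 512-byte blocks, named Malloc0 here
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 10-second, queue-depth-64, 4 KiB mixed random I/O against the new subsystem over TCP.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'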
00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:24.050 18:19:39 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.422 18:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:25.423 18:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:25.423 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.423 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:25.423 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.423 18:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:25.423 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.423 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:25.423 18:19:41 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.423 18:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:06:25.423 18:19:41 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:37.614 Initializing NVMe Controllers 00:06:37.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:37.615 Associating TCP 
(addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:37.615 Initialization complete. Launching workers. 00:06:37.615 ======================================================== 00:06:37.615 Latency(us) 00:06:37.615 Device Information : IOPS MiB/s Average min max 00:06:37.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14921.42 58.29 4288.81 744.27 20268.30 00:06:37.615 ======================================================== 00:06:37.615 Total : 14921.42 58.29 4288.81 744.27 20268.30 00:06:37.615 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:37.615 rmmod nvme_tcp 00:06:37.615 rmmod nvme_fabrics 00:06:37.615 rmmod nvme_keyring 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 64982 ']' 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 64982 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 64982 ']' 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 64982 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 64982 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:06:37.615 killing process with pid 64982 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 64982' 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 64982 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 64982 00:06:37.615 nvmf threads initialize successfully 00:06:37.615 bdev subsystem init successfully 00:06:37.615 created a nvmf target service 00:06:37.615 create targets's poll groups done 00:06:37.615 all subsystems of target started 00:06:37.615 nvmf target is running 00:06:37.615 all subsystems of target stopped 00:06:37.615 destroy targets's poll groups done 00:06:37.615 destroyed the nvmf target service 00:06:37.615 bdev subsystem finish successfully 00:06:37.615 nvmf threads destroy successfully 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:37.615 18:19:51 
nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:37.615 00:06:37.615 real 0m12.378s 00:06:37.615 user 0m44.431s 00:06:37.615 sys 0m2.001s 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:37.615 18:19:51 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:06:37.615 ************************************ 00:06:37.615 END TEST nvmf_example 00:06:37.615 ************************************ 00:06:37.615 18:19:51 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:37.615 18:19:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:37.615 18:19:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.615 18:19:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.615 ************************************ 00:06:37.615 START TEST nvmf_filesystem 00:06:37.615 ************************************ 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:37.615 * Looking for test storage... 
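Before the filesystem test output continues, the nvmf_example teardown recorded just above reduces to a handful of visible commands; this is an illustrative recap only, since part of it (remove_spdk_ns) is hidden behind a helper here and the pid is specific to this run:

  modprobe -v -r nvme-tcp            # also pulled out nvme_fabrics and nvme_keyring in this run
  modprobe -v -r nvme-fabrics
  kill 64982                         # the nvmf example target; the harness then waits for it to exit
  ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of the hidden remove_spdk_ns helper
  ip -4 addr flush nvmf_init_if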
00:06:37.615 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@60 -- # CONFIG_GOLANG=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:37.615 
18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:37.615 #define SPDK_CONFIG_H 00:06:37.615 #define SPDK_CONFIG_APPS 1 00:06:37.615 #define SPDK_CONFIG_ARCH native 00:06:37.615 #undef SPDK_CONFIG_ASAN 00:06:37.615 #define SPDK_CONFIG_AVAHI 1 00:06:37.615 #undef SPDK_CONFIG_CET 00:06:37.615 #define SPDK_CONFIG_COVERAGE 1 00:06:37.615 #define SPDK_CONFIG_CROSS_PREFIX 00:06:37.615 #undef SPDK_CONFIG_CRYPTO 00:06:37.615 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:37.615 #undef SPDK_CONFIG_CUSTOMOCF 00:06:37.615 #undef SPDK_CONFIG_DAOS 00:06:37.615 #define SPDK_CONFIG_DAOS_DIR 00:06:37.615 #define SPDK_CONFIG_DEBUG 1 00:06:37.615 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:37.615 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:06:37.615 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:37.615 #define SPDK_CONFIG_DPDK_LIB_DIR 00:06:37.615 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:37.615 #undef SPDK_CONFIG_DPDK_UADK 00:06:37.615 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:06:37.615 #define SPDK_CONFIG_EXAMPLES 1 00:06:37.615 #undef SPDK_CONFIG_FC 00:06:37.615 #define SPDK_CONFIG_FC_PATH 00:06:37.615 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:37.615 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:37.615 #undef SPDK_CONFIG_FUSE 00:06:37.615 #undef SPDK_CONFIG_FUZZER 00:06:37.615 #define SPDK_CONFIG_FUZZER_LIB 00:06:37.615 #define SPDK_CONFIG_GOLANG 1 00:06:37.615 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:37.615 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:06:37.615 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:37.615 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:06:37.615 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:37.615 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:37.615 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:37.615 #define SPDK_CONFIG_IDXD 1 00:06:37.615 #undef SPDK_CONFIG_IDXD_KERNEL 00:06:37.615 #undef SPDK_CONFIG_IPSEC_MB 00:06:37.615 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:37.615 #define SPDK_CONFIG_ISAL 1 00:06:37.615 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:37.615 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:37.615 #define SPDK_CONFIG_LIBDIR 00:06:37.615 #undef SPDK_CONFIG_LTO 00:06:37.615 #define SPDK_CONFIG_MAX_LCORES 00:06:37.615 #define SPDK_CONFIG_NVME_CUSE 1 00:06:37.615 #undef SPDK_CONFIG_OCF 00:06:37.615 #define SPDK_CONFIG_OCF_PATH 00:06:37.615 #define SPDK_CONFIG_OPENSSL_PATH 00:06:37.615 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:37.615 #define SPDK_CONFIG_PGO_DIR 00:06:37.615 #undef SPDK_CONFIG_PGO_USE 00:06:37.615 #define SPDK_CONFIG_PREFIX /usr/local 00:06:37.615 #undef SPDK_CONFIG_RAID5F 00:06:37.615 #undef SPDK_CONFIG_RBD 00:06:37.615 #define SPDK_CONFIG_RDMA 1 00:06:37.615 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:37.615 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:37.615 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 
00:06:37.615 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:37.615 #define SPDK_CONFIG_SHARED 1 00:06:37.615 #undef SPDK_CONFIG_SMA 00:06:37.615 #define SPDK_CONFIG_TESTS 1 00:06:37.615 #undef SPDK_CONFIG_TSAN 00:06:37.615 #define SPDK_CONFIG_UBLK 1 00:06:37.615 #define SPDK_CONFIG_UBSAN 1 00:06:37.615 #undef SPDK_CONFIG_UNIT_TESTS 00:06:37.615 #undef SPDK_CONFIG_URING 00:06:37.615 #define SPDK_CONFIG_URING_PATH 00:06:37.615 #undef SPDK_CONFIG_URING_ZNS 00:06:37.615 #define SPDK_CONFIG_USDT 1 00:06:37.615 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:37.615 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:37.615 #define SPDK_CONFIG_VFIO_USER 1 00:06:37.615 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:37.615 #define SPDK_CONFIG_VHOST 1 00:06:37.615 #define SPDK_CONFIG_VIRTIO 1 00:06:37.615 #undef SPDK_CONFIG_VTUNE 00:06:37.615 #define SPDK_CONFIG_VTUNE_DIR 00:06:37.615 #define SPDK_CONFIG_WERROR 1 00:06:37.615 #define SPDK_CONFIG_WPDK_DIR 00:06:37.615 #undef SPDK_CONFIG_XNVME 00:06:37.615 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.615 18:19:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:37.616 
18:19:51 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:06:37.616 18:19:51 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 1 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:06:37.616 
18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 1 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 
00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 1 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 1 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo leak:libfuse3.so 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:37.616 18:19:52 
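The long run of "-- # : 0" / "-- # : 1" entries above is autotest_common.sh turning the switches from autorun-spdk.conf into exported SPDK_TEST_*/SPDK_RUN_* environment variables and pinning sanitizer behaviour for the whole run. The trace only shows the expanded results; the underlying idiom appears to be the shell default-assignment pattern sketched below (variable names and sanitizer strings are taken from the trace; the exact ":=" form is an assumption):

    # Each switch keeps the value from autorun-spdk.conf if already set, else a default:
    : "${SPDK_TEST_NVMF:=0}";  export SPDK_TEST_NVMF     # traced above as ": 1" in this run
    : "${SPDK_RUN_UBSAN:=0}";  export SPDK_RUN_UBSAN     # traced above as ": 1"
    : "${SPDK_TEST_VHOST:=0}"; export SPDK_TEST_VHOST    # traced above as ": 0"
    # Sanitizer behaviour is fixed for the whole run (verbatim from the trace):
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134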
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j10 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 65235 ]] 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 65235 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback 
storage_candidates 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:06:37.616 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.aa5FtL 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.aa5FtL/tests/target /tmp/spdk.aa5FtL 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=devtmpfs 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=4194304 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=4194304 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6264508416 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267883520 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=3375104 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=2494349312 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=2507153408 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12804096 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=13809553408 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5215105024 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda5 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=btrfs 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=13809553408 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=20314062848 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=5215105024 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda2 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext4 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=843546624 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1012768768 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=100016128 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6267748352 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6267883520 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=135168 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/vda3 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=vfat 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=92499968 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=104607744 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=12107776 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=1253572608 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=1253576704 00:06:37.617 18:19:52 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest_2/fedora38-libvirt/output 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=fuse.sshfs 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=93699907584 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=105088212992 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=6002872320 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:06:37.617 * Looking for test storage... 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/home 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=13809553408 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == tmpfs ]] 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ btrfs == ramfs ]] 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ /home == / ]] 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:37.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- 
${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:37.617 18:19:52 
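At this point nvmf/common.sh has fixed the constants the rest of the run depends on: listener ports 4420-4422, a freshly generated host NQN and host ID (nvme gen-hostnqn), and the virt/tcp combination that selects the veth-based fabric. The branch nvmftestinit takes just below is, condensed and slightly paraphrased from the trace:

    # prepare_net_devs / nvmftestinit as exercised in this run (NET_TYPE=virt, TEST_TRANSPORT=tcp):
    if [[ $NET_TYPE == phy || $NET_TYPE == phy-fallback ]]; then
        :   # physical NICs -- not taken here
    elif [[ $TEST_TRANSPORT == tcp ]]; then
        nvmf_veth_init   # builds the namespace/veth/bridge fabric traced below
    fi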
nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:37.617 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:37.618 Cannot find device "nvmf_tgt_br" 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@155 -- # true 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:37.618 Cannot find device "nvmf_tgt_br2" 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@156 -- # true 00:06:37.618 18:19:52 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:37.618 Cannot find device "nvmf_tgt_br" 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@158 -- # true 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:37.618 Cannot find device "nvmf_tgt_br2" 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@159 -- # true 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:37.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:37.618 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:37.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:37.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:06:37.618 00:06:37.618 --- 10.0.0.2 ping statistics --- 00:06:37.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.618 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:37.618 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:37.618 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:06:37.618 00:06:37.618 --- 10.0.0.3 ping statistics --- 00:06:37.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.618 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:37.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:37.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:06:37.618 00:06:37.618 --- 10.0.0.1 ping statistics --- 00:06:37.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.618 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@433 -- # return 0 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.618 ************************************ 00:06:37.618 START TEST nvmf_filesystem_no_in_capsule 00:06:37.618 ************************************ 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@47 -- # in_capsule=0 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65392 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65392 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 65392 ']' 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:37.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:37.618 18:19:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.618 [2024-05-13 18:19:52.538419] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:37.618 [2024-05-13 18:19:52.538507] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.618 [2024-05-13 18:19:52.677955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.618 [2024-05-13 18:19:52.812330] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:37.618 [2024-05-13 18:19:52.812600] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:37.618 [2024-05-13 18:19:52.812762] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:37.618 [2024-05-13 18:19:52.812925] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:37.618 [2024-05-13 18:19:52.812976] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
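Before the target starts, nvmf_veth_init builds an isolated point-to-point fabric: a network namespace for the target, veth pairs for the initiator and target sides, a bridge joining them, addresses in 10.0.0.0/24, and iptables rules admitting the NVMe/TCP port; connectivity is then verified with the pings above and nvmf_tgt is launched inside the namespace. Condensed from the trace (commands verbatim; the second target interface, the link-up steps and the pings are omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # the target runs inside the namespace, reachable from the host at 10.0.0.2:4420:
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF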
00:06:37.618 [2024-05-13 18:19:52.813202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.618 [2024-05-13 18:19:52.813351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.618 [2024-05-13 18:19:52.813520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.618 [2024-05-13 18:19:52.813522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.618 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:37.618 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:37.618 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:37.618 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.618 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.618 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:37.618 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:37.618 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:37.618 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.618 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.618 [2024-05-13 18:19:53.554404] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.876 Malloc1 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.876 [2024-05-13 18:19:53.742527] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:37.876 [2024-05-13 18:19:53.742796] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.876 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:37.876 { 00:06:37.876 "aliases": [ 00:06:37.876 "95539c0f-043c-4fb6-b739-2b17b56a5235" 00:06:37.876 ], 00:06:37.876 "assigned_rate_limits": { 00:06:37.876 "r_mbytes_per_sec": 0, 00:06:37.876 "rw_ios_per_sec": 0, 00:06:37.876 "rw_mbytes_per_sec": 0, 00:06:37.876 "w_mbytes_per_sec": 0 00:06:37.876 }, 00:06:37.876 "block_size": 512, 00:06:37.876 "claim_type": "exclusive_write", 00:06:37.876 "claimed": true, 00:06:37.876 "driver_specific": {}, 00:06:37.876 "memory_domains": [ 00:06:37.876 { 00:06:37.876 "dma_device_id": "system", 00:06:37.877 "dma_device_type": 1 00:06:37.877 }, 00:06:37.877 { 00:06:37.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:37.877 "dma_device_type": 2 00:06:37.877 } 00:06:37.877 ], 00:06:37.877 "name": "Malloc1", 00:06:37.877 "num_blocks": 1048576, 00:06:37.877 "product_name": "Malloc disk", 00:06:37.877 "supported_io_types": { 00:06:37.877 "abort": true, 00:06:37.877 "compare": false, 00:06:37.877 "compare_and_write": false, 00:06:37.877 "flush": true, 00:06:37.877 "nvme_admin": false, 00:06:37.877 "nvme_io": false, 00:06:37.877 "read": true, 00:06:37.877 "reset": true, 00:06:37.877 
"unmap": true, 00:06:37.877 "write": true, 00:06:37.877 "write_zeroes": true 00:06:37.877 }, 00:06:37.877 "uuid": "95539c0f-043c-4fb6-b739-2b17b56a5235", 00:06:37.877 "zoned": false 00:06:37.877 } 00:06:37.877 ]' 00:06:37.877 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:37.877 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:38.134 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:38.134 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:38.134 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:38.134 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:38.134 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:38.134 18:19:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:38.134 18:19:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:38.134 18:19:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:38.134 18:19:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:38.134 18:19:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:38.134 18:19:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:40.658 18:19:56 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:40.658 18:19:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.604 ************************************ 00:06:41.604 START TEST filesystem_ext4 00:06:41.604 ************************************ 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:41.604 18:19:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:41.604 mke2fs 1.46.5 (30-Dec-2021) 00:06:41.604 Discarding device blocks: 0/522240 done 00:06:41.604 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:41.604 Filesystem UUID: cf65b767-1315-4937-9e3b-3c69aec7f6f4 00:06:41.604 Superblock backups stored on blocks: 00:06:41.604 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:41.604 00:06:41.604 Allocating group tables: 0/64 done 00:06:41.604 Writing inode tables: 0/64 done 00:06:41.604 Creating journal (8192 blocks): done 00:06:41.604 Writing superblocks and filesystem accounting information: 0/64 done 00:06:41.604 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:41.604 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 65392 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:41.862 00:06:41.862 real 0m0.402s 00:06:41.862 user 0m0.021s 00:06:41.862 sys 0m0.054s 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:41.862 ************************************ 00:06:41.862 END TEST filesystem_ext4 00:06:41.862 ************************************ 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:41.862 18:19:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:41.862 ************************************ 00:06:41.862 START TEST filesystem_btrfs 00:06:41.862 ************************************ 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:41.862 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:42.120 btrfs-progs v6.6.2 00:06:42.120 See https://btrfs.readthedocs.io for more information. 00:06:42.120 00:06:42.120 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:42.120 NOTE: several default settings have changed in version 5.15, please make sure 00:06:42.120 this does not affect your deployments: 00:06:42.120 - DUP for metadata (-m dup) 00:06:42.120 - enabled no-holes (-O no-holes) 00:06:42.120 - enabled free-space-tree (-R free-space-tree) 00:06:42.120 00:06:42.120 Label: (null) 00:06:42.120 UUID: b4ba0cc0-4623-4cdc-8c54-c11418249d61 00:06:42.120 Node size: 16384 00:06:42.120 Sector size: 4096 00:06:42.120 Filesystem size: 510.00MiB 00:06:42.120 Block group profiles: 00:06:42.120 Data: single 8.00MiB 00:06:42.120 Metadata: DUP 32.00MiB 00:06:42.120 System: DUP 8.00MiB 00:06:42.120 SSD detected: yes 00:06:42.120 Zoned device: no 00:06:42.120 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:42.120 Runtime features: free-space-tree 00:06:42.120 Checksum: crc32c 00:06:42.120 Number of devices: 1 00:06:42.120 Devices: 00:06:42.120 ID SIZE PATH 00:06:42.120 1 510.00MiB /dev/nvme0n1p1 00:06:42.120 00:06:42.120 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:42.120 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:42.120 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:42.120 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:06:42.120 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:42.120 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:06:42.120 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:42.120 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:42.120 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 65392 00:06:42.120 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:42.120 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:42.121 00:06:42.121 real 0m0.254s 00:06:42.121 user 0m0.023s 00:06:42.121 sys 0m0.070s 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:42.121 ************************************ 00:06:42.121 END TEST filesystem_btrfs 00:06:42.121 ************************************ 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:42.121 18:19:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:42.121 ************************************ 00:06:42.121 START TEST filesystem_xfs 00:06:42.121 ************************************ 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:42.121 18:19:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:42.379 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:42.379 = sectsz=512 attr=2, projid32bit=1 00:06:42.379 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:42.379 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:42.379 data = bsize=4096 blocks=130560, imaxpct=25 00:06:42.379 = sunit=0 swidth=0 blks 00:06:42.379 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:42.379 log =internal log bsize=4096 blocks=16384, version=2 00:06:42.379 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:42.379 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:42.944 Discarding blocks...Done. 
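The ext4, btrfs and xfs formats traced in this pass all go through the suite's make_filesystem helper in common/autotest_common.sh. A condensed sketch of that helper, reconstructed from the xtrace lines above (@922-@941) with its retry bookkeeping (the local i counter) trimmed; the real function may differ in detail:

  make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local force
    # ext4 needs an uppercase -F to force formatting; btrfs and xfs take -f
    if [[ $fstype == ext4 ]]; then
      force=-F
    else
      force=-f
    fi
    mkfs.$fstype $force "$dev_name" && return 0
  }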
00:06:42.944 18:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:42.944 18:19:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 65392 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:45.471 00:06:45.471 real 0m3.136s 00:06:45.471 user 0m0.022s 00:06:45.471 sys 0m0.050s 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:45.471 ************************************ 00:06:45.471 END TEST filesystem_xfs 00:06:45.471 ************************************ 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:45.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:45.471 
18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 65392 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 65392 ']' 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 65392 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65392 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:45.471 killing process with pid 65392 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65392' 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 65392 00:06:45.471 [2024-05-13 18:20:01.283511] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:45.471 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 65392 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:46.036 00:06:46.036 real 0m9.251s 00:06:46.036 user 0m34.767s 00:06:46.036 sys 0m1.596s 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.036 ************************************ 00:06:46.036 END TEST nvmf_filesystem_no_in_capsule 00:06:46.036 ************************************ 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 
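Condensed for reference, with the xtrace prefixes stripped: the nvmf_filesystem_no_in_capsule pass that ends here and the nvmf_filesystem_in_capsule pass that starts below run the same target/filesystem.sh sequence, differing only in the -c (in-capsule data size) argument to nvmf_create_transport (0 vs 4096). A hedged sketch of that sequence, with rpc.py standing in for the suite's rpc_cmd wrapper and the addresses, NQNs and sizes taken from the log above:

  # target side (nvmf_tgt is already running; pid 65392 in this pass, 65711 below)
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0        # the in_capsule pass passes -c 4096
  rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 \
       --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61
  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  for fs in ext4 btrfs xfs; do
    nvmf_filesystem_create "$fs" nvme0n1    # the run_test filesystem_<fs> steps above: mkfs, mount, touch, umount
  done

  # teardown
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1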
00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:46.036 ************************************ 00:06:46.036 START TEST nvmf_filesystem_in_capsule 00:06:46.036 ************************************ 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=65711 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 65711 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 65711 ']' 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.036 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:46.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.037 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.037 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:46.037 18:20:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:46.037 [2024-05-13 18:20:01.840591] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:46.037 [2024-05-13 18:20:01.840681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.037 [2024-05-13 18:20:01.976833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:46.295 [2024-05-13 18:20:02.095301] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.295 [2024-05-13 18:20:02.095354] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.295 [2024-05-13 18:20:02.095366] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.295 [2024-05-13 18:20:02.095374] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:06:46.295 [2024-05-13 18:20:02.095382] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:46.295 [2024-05-13 18:20:02.095537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.295 [2024-05-13 18:20:02.096128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.295 [2024-05-13 18:20:02.096276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.295 [2024-05-13 18:20:02.096284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.229 18:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:47.229 18:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:47.229 18:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:47.229 18:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:47.229 18:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.229 18:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:47.229 18:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:47.229 18:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:47.229 18:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.229 18:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.229 [2024-05-13 18:20:02.934813] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.229 18:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.229 18:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:47.229 18:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.229 18:20:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.229 Malloc1 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.229 18:20:03 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.229 [2024-05-13 18:20:03.119725] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:47.229 [2024-05-13 18:20:03.120001] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.229 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:47.229 { 00:06:47.229 "aliases": [ 00:06:47.229 "1635fa91-b784-47eb-9c3d-4ba4aa7de9aa" 00:06:47.229 ], 00:06:47.229 "assigned_rate_limits": { 00:06:47.229 "r_mbytes_per_sec": 0, 00:06:47.229 "rw_ios_per_sec": 0, 00:06:47.229 "rw_mbytes_per_sec": 0, 00:06:47.229 "w_mbytes_per_sec": 0 00:06:47.229 }, 00:06:47.229 "block_size": 512, 00:06:47.229 "claim_type": "exclusive_write", 00:06:47.229 "claimed": true, 00:06:47.229 "driver_specific": {}, 00:06:47.229 "memory_domains": [ 00:06:47.229 { 00:06:47.229 "dma_device_id": "system", 00:06:47.229 "dma_device_type": 1 00:06:47.229 }, 00:06:47.229 { 00:06:47.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.229 "dma_device_type": 2 00:06:47.229 } 00:06:47.229 ], 00:06:47.229 "name": "Malloc1", 00:06:47.229 "num_blocks": 1048576, 00:06:47.229 "product_name": "Malloc disk", 00:06:47.229 "supported_io_types": { 00:06:47.229 "abort": true, 00:06:47.229 "compare": false, 00:06:47.229 "compare_and_write": false, 00:06:47.229 "flush": true, 00:06:47.229 "nvme_admin": false, 00:06:47.230 "nvme_io": false, 00:06:47.230 "read": true, 00:06:47.230 "reset": true, 
00:06:47.230 "unmap": true, 00:06:47.230 "write": true, 00:06:47.230 "write_zeroes": true 00:06:47.230 }, 00:06:47.230 "uuid": "1635fa91-b784-47eb-9c3d-4ba4aa7de9aa", 00:06:47.230 "zoned": false 00:06:47.230 } 00:06:47.230 ]' 00:06:47.230 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:47.486 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:47.486 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:47.486 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:47.486 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:47.486 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:47.486 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:47.486 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:47.486 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:47.486 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:47.486 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:47.486 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:47.486 18:20:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 
00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:06:50.014 18:20:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:50.947 ************************************ 00:06:50.947 START TEST filesystem_in_capsule_ext4 00:06:50.947 ************************************ 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:50.947 mke2fs 1.46.5 (30-Dec-2021) 00:06:50.947 Discarding device blocks: 0/522240 done 00:06:50.947 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:50.947 Filesystem UUID: d81f1463-a4f0-4a7d-ade8-53372ad7e5ab 00:06:50.947 Superblock backups stored on blocks: 00:06:50.947 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:50.947 00:06:50.947 Allocating group tables: 0/64 done 00:06:50.947 Writing inode tables: 0/64 done 00:06:50.947 Creating journal (8192 blocks): done 00:06:50.947 Writing superblocks and filesystem accounting information: 0/64 done 00:06:50.947 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 65711 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:50.947 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:51.205 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:51.205 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:51.205 00:06:51.205 real 0m0.347s 00:06:51.205 user 0m0.019s 00:06:51.206 sys 0m0.051s 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:06:51.206 ************************************ 00:06:51.206 END TEST filesystem_in_capsule_ext4 00:06:51.206 ************************************ 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:51.206 ************************************ 00:06:51.206 START TEST filesystem_in_capsule_btrfs 00:06:51.206 ************************************ 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:06:51.206 18:20:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:51.206 btrfs-progs v6.6.2 00:06:51.206 See https://btrfs.readthedocs.io for more information. 00:06:51.206 00:06:51.206 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:51.206 NOTE: several default settings have changed in version 5.15, please make sure 00:06:51.206 this does not affect your deployments: 00:06:51.206 - DUP for metadata (-m dup) 00:06:51.206 - enabled no-holes (-O no-holes) 00:06:51.206 - enabled free-space-tree (-R free-space-tree) 00:06:51.206 00:06:51.206 Label: (null) 00:06:51.206 UUID: 8b528be8-4b29-4b7a-83b6-9126764a87a1 00:06:51.206 Node size: 16384 00:06:51.206 Sector size: 4096 00:06:51.206 Filesystem size: 510.00MiB 00:06:51.206 Block group profiles: 00:06:51.206 Data: single 8.00MiB 00:06:51.206 Metadata: DUP 32.00MiB 00:06:51.206 System: DUP 8.00MiB 00:06:51.206 SSD detected: yes 00:06:51.206 Zoned device: no 00:06:51.206 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:51.206 Runtime features: free-space-tree 00:06:51.206 Checksum: crc32c 00:06:51.206 Number of devices: 1 00:06:51.206 Devices: 00:06:51.206 ID SIZE PATH 00:06:51.206 1 510.00MiB /dev/nvme0n1p1 00:06:51.206 00:06:51.206 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0 00:06:51.206 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:51.206 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:51.206 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:06:51.206 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:51.206 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:06:51.206 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:06:51.206 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 65711 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:51.464 00:06:51.464 real 0m0.229s 00:06:51.464 user 0m0.018s 00:06:51.464 sys 0m0.068s 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:06:51.464 ************************************ 00:06:51.464 END TEST filesystem_in_capsule_btrfs 00:06:51.464 ************************************ 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:51.464 ************************************ 00:06:51.464 START TEST filesystem_in_capsule_xfs 00:06:51.464 ************************************ 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f 00:06:51.464 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:51.464 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:51.464 = sectsz=512 attr=2, projid32bit=1 00:06:51.464 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:51.464 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:51.464 data = bsize=4096 blocks=130560, imaxpct=25 00:06:51.464 = sunit=0 swidth=0 blks 00:06:51.464 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:51.465 log =internal log bsize=4096 blocks=16384, version=2 00:06:51.465 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:51.465 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:52.030 Discarding blocks...Done. 
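Once mkfs succeeds for a given filesystem, the per-filesystem check traced at filesystem.sh@23-@43 is a short mount-and-touch smoke test plus a liveness check on the target. Condensed, assuming $nvmfpid holds the nvmf_tgt pid (65711 in this pass):

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$nvmfpid"                         # the target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1      # the namespace is still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1    # and so is the test partition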
00:06:52.030 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0 00:06:52.030 18:20:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:53.929 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 65711 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:53.930 00:06:53.930 real 0m2.594s 00:06:53.930 user 0m0.022s 00:06:53.930 sys 0m0.052s 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:06:53.930 ************************************ 00:06:53.930 END TEST filesystem_in_capsule_xfs 00:06:53.930 ************************************ 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:06:53.930 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:54.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:54.188 18:20:09 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 65711 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 65711 ']' 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 65711 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 65711 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:54.188 killing process with pid 65711 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 65711' 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 65711 00:06:54.188 [2024-05-13 18:20:09.969103] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:54.188 18:20:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 65711 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:54.754 00:06:54.754 real 0m8.640s 00:06:54.754 user 0m32.500s 00:06:54.754 sys 0m1.538s 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:54.754 ************************************ 00:06:54.754 END TEST nvmf_filesystem_in_capsule 00:06:54.754 ************************************ 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 
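The teardown traced above reduces to a handful of commands: drop the test partition, disconnect the initiator, delete the subsystem over RPC, and stop the target. A hedged sketch using scripts/rpc.py in place of the rpc_cmd wrapper:
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1     # remove partition 1 under a lock
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # detach the initiator side
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"                 # stop nvmf_tgt (pid 65711 in this run)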
00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:54.754 rmmod nvme_tcp 00:06:54.754 rmmod nvme_fabrics 00:06:54.754 rmmod nvme_keyring 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:54.754 00:06:54.754 real 0m18.714s 00:06:54.754 user 1m7.522s 00:06:54.754 sys 0m3.533s 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.754 ************************************ 00:06:54.754 END TEST nvmf_filesystem 00:06:54.754 18:20:10 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.754 ************************************ 00:06:54.754 18:20:10 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:54.754 18:20:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:54.754 18:20:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:54.754 18:20:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:54.754 ************************************ 00:06:54.754 START TEST nvmf_target_discovery 00:06:54.754 ************************************ 00:06:54.754 18:20:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:55.019 * Looking for test storage... 
00:06:55.019 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:55.019 18:20:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:55.019 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:06:55.019 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.019 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.019 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.019 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.019 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.019 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.019 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.019 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.019 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.019 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.019 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:06:55.020 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:06:55.020 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.020 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.020 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:55.020 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.020 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.020 18:20:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.020 18:20:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.020 18:20:10 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.020 18:20:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.020 18:20:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.020 18:20:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:55.021 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:55.022 Cannot find device "nvmf_tgt_br" 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@155 -- # true 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:55.022 Cannot find device "nvmf_tgt_br2" 00:06:55.022 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@156 -- # true 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:55.023 Cannot find device "nvmf_tgt_br" 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@158 -- # true 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- 
nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:55.023 Cannot find device "nvmf_tgt_br2" 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@159 -- # true 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:55.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:55.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:55.023 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:55.024 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:55.024 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:55.024 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:55.292 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:55.292 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:55.292 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:55.292 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:55.292 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:55.292 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:55.292 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:55.292 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:55.292 18:20:10 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:55.292 18:20:11 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:55.292 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:55.292 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:55.292 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:55.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:55.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:06:55.292 00:06:55.292 --- 10.0.0.2 ping statistics --- 00:06:55.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.292 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:06:55.292 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:55.292 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:55.292 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:06:55.292 00:06:55.292 --- 10.0.0.3 ping statistics --- 00:06:55.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.292 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:06:55.292 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:55.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:55.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:06:55.293 00:06:55.293 --- 10.0.0.1 ping statistics --- 00:06:55.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.293 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@433 -- # return 0 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=66162 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 66162 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 66162 ']' 00:06:55.293 18:20:11 
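The nvmf_veth_init trace above builds the virtual test topology: a network namespace for the target, veth pairs into it, a bridge joining the host-side ends, an iptables rule for port 4420, and a ping sanity check. Condensed into a sketch with the addresses used in this run (the second target interface, 10.0.0.3 on nvmf_tgt_if2, follows the same pattern and is left out):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                           # host can reach the target namespace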
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:55.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:55.293 18:20:11 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:55.293 [2024-05-13 18:20:11.147604] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:55.293 [2024-05-13 18:20:11.147707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.551 [2024-05-13 18:20:11.290325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:55.551 [2024-05-13 18:20:11.419723] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.551 [2024-05-13 18:20:11.419788] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.551 [2024-05-13 18:20:11.419807] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.551 [2024-05-13 18:20:11.419818] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.551 [2024-05-13 18:20:11.419828] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
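nvmfappstart, whose trace begins here, launches nvmf_tgt inside the namespace and waits until its RPC socket answers before returning. A rough stand-in for what it does (waitforlisten is an autotest helper; the polling loop below is an assumed simplification, not its real implementation):
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# poll the default RPC socket (/var/tmp/spdk.sock) until the app is up
until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done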
00:06:55.551 [2024-05-13 18:20:11.419955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.551 [2024-05-13 18:20:11.420676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.551 [2024-05-13 18:20:11.420845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.551 [2024-05-13 18:20:11.420887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.518 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.518 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0 00:06:56.518 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 [2024-05-13 18:20:12.214692] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 Null1 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:06:56.519 [2024-05-13 18:20:12.275584] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:56.519 [2024-05-13 18:20:12.275973] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 Null2 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 Null3 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery 
-- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 Null4 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
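Stripped of the xtrace noise, the target being assembled by the RPCs above is four null bdevs, each exported through its own subsystem with a TCP listener on 10.0.0.2:4420, plus a discovery listener and a referral to port 4430; roughly, with scripts/rpc.py standing in for the rpc_cmd wrapper:
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    scripts/rpc.py bdev_null_create "Null$i" 102400 512                  # size and block size as in the trace
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "SPDK0000000000000$i"                                      # allow any host, set serial
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430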
xtrace_disable 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.519 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -a 10.0.0.2 -s 4420 00:06:56.777 00:06:56.777 Discovery Log Number of Records 6, Generation counter 6 00:06:56.777 =====Discovery Log Entry 0====== 00:06:56.777 trtype: tcp 00:06:56.777 adrfam: ipv4 00:06:56.777 subtype: current discovery subsystem 00:06:56.777 treq: not required 00:06:56.777 portid: 0 00:06:56.777 trsvcid: 4420 00:06:56.777 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:56.777 traddr: 10.0.0.2 00:06:56.777 eflags: explicit discovery connections, duplicate discovery information 00:06:56.777 sectype: none 00:06:56.777 =====Discovery Log Entry 1====== 00:06:56.777 trtype: tcp 00:06:56.777 adrfam: ipv4 00:06:56.777 subtype: nvme subsystem 00:06:56.777 treq: not required 00:06:56.777 portid: 0 00:06:56.777 trsvcid: 4420 00:06:56.777 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:56.777 traddr: 10.0.0.2 00:06:56.777 eflags: none 00:06:56.777 sectype: none 00:06:56.777 =====Discovery Log Entry 2====== 00:06:56.777 trtype: tcp 00:06:56.777 adrfam: ipv4 00:06:56.777 subtype: nvme subsystem 00:06:56.777 treq: not required 00:06:56.777 portid: 0 00:06:56.777 trsvcid: 4420 00:06:56.777 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:56.777 traddr: 10.0.0.2 00:06:56.777 eflags: none 00:06:56.777 sectype: none 00:06:56.777 =====Discovery Log Entry 3====== 00:06:56.777 trtype: tcp 00:06:56.777 adrfam: ipv4 00:06:56.777 subtype: nvme subsystem 00:06:56.777 treq: not required 00:06:56.777 portid: 0 00:06:56.777 trsvcid: 4420 00:06:56.777 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:56.777 traddr: 10.0.0.2 00:06:56.777 eflags: none 00:06:56.777 sectype: none 00:06:56.777 =====Discovery Log Entry 4====== 00:06:56.777 trtype: tcp 00:06:56.777 adrfam: ipv4 00:06:56.777 subtype: nvme subsystem 00:06:56.777 treq: not required 00:06:56.777 portid: 0 00:06:56.777 trsvcid: 4420 00:06:56.777 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:56.777 traddr: 10.0.0.2 00:06:56.777 eflags: none 00:06:56.777 sectype: none 00:06:56.777 =====Discovery Log Entry 5====== 00:06:56.777 trtype: tcp 00:06:56.777 adrfam: ipv4 00:06:56.777 subtype: discovery subsystem referral 00:06:56.777 treq: not required 00:06:56.777 portid: 0 00:06:56.777 trsvcid: 4430 00:06:56.777 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:56.777 traddr: 10.0.0.2 00:06:56.777 eflags: none 00:06:56.777 sectype: none 00:06:56.777 Perform nvmf subsystem discovery via RPC 00:06:56.777 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:56.777 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:56.777 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.777 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.777 [ 00:06:56.777 { 00:06:56.777 "allow_any_host": true, 00:06:56.777 "hosts": [], 00:06:56.777 "listen_addresses": [ 00:06:56.777 { 00:06:56.777 "adrfam": "IPv4", 00:06:56.777 "traddr": "10.0.0.2", 00:06:56.777 "trsvcid": "4420", 00:06:56.777 "trtype": "TCP" 00:06:56.777 } 00:06:56.777 ], 00:06:56.777 
"nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:56.777 "subtype": "Discovery" 00:06:56.777 }, 00:06:56.777 { 00:06:56.777 "allow_any_host": true, 00:06:56.777 "hosts": [], 00:06:56.777 "listen_addresses": [ 00:06:56.777 { 00:06:56.777 "adrfam": "IPv4", 00:06:56.777 "traddr": "10.0.0.2", 00:06:56.777 "trsvcid": "4420", 00:06:56.777 "trtype": "TCP" 00:06:56.777 } 00:06:56.777 ], 00:06:56.777 "max_cntlid": 65519, 00:06:56.777 "max_namespaces": 32, 00:06:56.777 "min_cntlid": 1, 00:06:56.777 "model_number": "SPDK bdev Controller", 00:06:56.777 "namespaces": [ 00:06:56.777 { 00:06:56.777 "bdev_name": "Null1", 00:06:56.777 "name": "Null1", 00:06:56.777 "nguid": "E951E0CB5E3D42A799AE01C3EEFFC929", 00:06:56.777 "nsid": 1, 00:06:56.777 "uuid": "e951e0cb-5e3d-42a7-99ae-01c3eeffc929" 00:06:56.777 } 00:06:56.777 ], 00:06:56.777 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:56.777 "serial_number": "SPDK00000000000001", 00:06:56.777 "subtype": "NVMe" 00:06:56.777 }, 00:06:56.777 { 00:06:56.777 "allow_any_host": true, 00:06:56.777 "hosts": [], 00:06:56.777 "listen_addresses": [ 00:06:56.777 { 00:06:56.777 "adrfam": "IPv4", 00:06:56.777 "traddr": "10.0.0.2", 00:06:56.777 "trsvcid": "4420", 00:06:56.777 "trtype": "TCP" 00:06:56.777 } 00:06:56.777 ], 00:06:56.777 "max_cntlid": 65519, 00:06:56.777 "max_namespaces": 32, 00:06:56.777 "min_cntlid": 1, 00:06:56.777 "model_number": "SPDK bdev Controller", 00:06:56.777 "namespaces": [ 00:06:56.777 { 00:06:56.777 "bdev_name": "Null2", 00:06:56.777 "name": "Null2", 00:06:56.777 "nguid": "CAC21B13471F459796AA1DCE46401BF6", 00:06:56.777 "nsid": 1, 00:06:56.777 "uuid": "cac21b13-471f-4597-96aa-1dce46401bf6" 00:06:56.777 } 00:06:56.777 ], 00:06:56.777 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:56.777 "serial_number": "SPDK00000000000002", 00:06:56.777 "subtype": "NVMe" 00:06:56.777 }, 00:06:56.777 { 00:06:56.777 "allow_any_host": true, 00:06:56.777 "hosts": [], 00:06:56.777 "listen_addresses": [ 00:06:56.777 { 00:06:56.777 "adrfam": "IPv4", 00:06:56.777 "traddr": "10.0.0.2", 00:06:56.777 "trsvcid": "4420", 00:06:56.777 "trtype": "TCP" 00:06:56.777 } 00:06:56.777 ], 00:06:56.777 "max_cntlid": 65519, 00:06:56.777 "max_namespaces": 32, 00:06:56.777 "min_cntlid": 1, 00:06:56.778 "model_number": "SPDK bdev Controller", 00:06:56.778 "namespaces": [ 00:06:56.778 { 00:06:56.778 "bdev_name": "Null3", 00:06:56.778 "name": "Null3", 00:06:56.778 "nguid": "F8C1D2AB6DA049DA9725DAE8C6ED6565", 00:06:56.778 "nsid": 1, 00:06:56.778 "uuid": "f8c1d2ab-6da0-49da-9725-dae8c6ed6565" 00:06:56.778 } 00:06:56.778 ], 00:06:56.778 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:56.778 "serial_number": "SPDK00000000000003", 00:06:56.778 "subtype": "NVMe" 00:06:56.778 }, 00:06:56.778 { 00:06:56.778 "allow_any_host": true, 00:06:56.778 "hosts": [], 00:06:56.778 "listen_addresses": [ 00:06:56.778 { 00:06:56.778 "adrfam": "IPv4", 00:06:56.778 "traddr": "10.0.0.2", 00:06:56.778 "trsvcid": "4420", 00:06:56.778 "trtype": "TCP" 00:06:56.778 } 00:06:56.778 ], 00:06:56.778 "max_cntlid": 65519, 00:06:56.778 "max_namespaces": 32, 00:06:56.778 "min_cntlid": 1, 00:06:56.778 "model_number": "SPDK bdev Controller", 00:06:56.778 "namespaces": [ 00:06:56.778 { 00:06:56.778 "bdev_name": "Null4", 00:06:56.778 "name": "Null4", 00:06:56.778 "nguid": "9A43820DBE3F4D23B6A881F74EFE349D", 00:06:56.778 "nsid": 1, 00:06:56.778 "uuid": "9a43820d-be3f-4d23-b6a8-81f74efe349d" 00:06:56.778 } 00:06:56.778 ], 00:06:56.778 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:56.778 "serial_number": "SPDK00000000000004", 00:06:56.778 "subtype": 
"NVMe" 00:06:56.778 } 00:06:56.778 ] 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.778 18:20:12 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:56.778 rmmod nvme_tcp 00:06:56.778 rmmod nvme_fabrics 00:06:56.778 rmmod nvme_keyring 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 66162 ']' 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 66162 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 66162 ']' 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 66162 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:06:56.778 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:56.778 
18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66162 00:06:57.035 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:57.035 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:57.035 killing process with pid 66162 00:06:57.035 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66162' 00:06:57.035 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 66162 00:06:57.035 [2024-05-13 18:20:12.728116] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:57.035 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 66162 00:06:57.294 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:57.294 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:57.294 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:57.295 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:57.295 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:57.295 18:20:12 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.295 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.295 18:20:12 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.295 18:20:13 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:06:57.295 00:06:57.295 real 0m2.400s 00:06:57.295 user 0m6.476s 00:06:57.295 sys 0m0.634s 00:06:57.295 18:20:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:57.295 18:20:13 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:57.295 ************************************ 00:06:57.295 END TEST nvmf_target_discovery 00:06:57.295 ************************************ 00:06:57.295 18:20:13 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:57.295 18:20:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:57.295 18:20:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:57.295 18:20:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:57.295 ************************************ 00:06:57.295 START TEST nvmf_referrals 00:06:57.295 ************************************ 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:57.295 * Looking for test storage... 
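The nvmf_referrals test starting here revolves around the same referral RPCs the discovery test touched once above, exercised against the referral addresses defined a little further down (127.0.0.2 through 127.0.0.4, port 4430). Managing referrals by hand looks roughly like this; nvmf_discovery_get_referrals is included on the assumption that this SPDK build provides it:
scripts/rpc.py nvmf_discovery_add_referral    -t tcp -a 127.0.0.2 -s 4430
scripts/rpc.py nvmf_discovery_get_referrals                                  # inspect what the discovery service will hand out
scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430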
00:06:57.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@432 -- # nvmf_veth_init 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:06:57.295 Cannot find device "nvmf_tgt_br" 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@155 -- # true 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:06:57.295 Cannot find device "nvmf_tgt_br2" 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@156 -- # true 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:06:57.295 Cannot find device "nvmf_tgt_br" 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@158 -- # true 00:06:57.295 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:06:57.554 Cannot find device "nvmf_tgt_br2" 
00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@159 -- # true 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:57.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:57.554 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:57.554 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:06:57.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:57.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:06:57.811 00:06:57.811 --- 10.0.0.2 ping statistics --- 00:06:57.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.811 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:06:57.811 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:57.811 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:06:57.811 00:06:57.811 --- 10.0.0.3 ping statistics --- 00:06:57.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.811 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:57.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:57.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:06:57.811 00:06:57.811 --- 10.0.0.1 ping statistics --- 00:06:57.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.811 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@433 -- # return 0 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=66385 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 66385 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 66385 ']' 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.811 18:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:57.812 18:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
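The nvmf_veth_init sequence above builds the virtual network the rest of the run talks over. Boiled down to the commands visible in the trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is created the same way), the topology is roughly:

    ip netns add nvmf_tgt_ns_spdk                                   # target gets its own network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br         # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link add nvmf_br type bridge                                 # bridge joins the two halves
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                              # sanity check: initiator can reach the target

This is a condensed sketch, not the SPDK script itself; the `up` steps and the 10.0.0.3 leg are omitted for brevity. With the network in place, the target application is started inside the namespace and the test waits for its RPC socket: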
00:06:57.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.812 18:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:57.812 18:20:13 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:57.812 [2024-05-13 18:20:13.624670] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:06:57.812 [2024-05-13 18:20:13.624784] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.069 [2024-05-13 18:20:13.766029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.069 [2024-05-13 18:20:13.917184] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.069 [2024-05-13 18:20:13.917241] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.069 [2024-05-13 18:20:13.917255] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.069 [2024-05-13 18:20:13.917266] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.069 [2024-05-13 18:20:13.917275] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:58.069 [2024-05-13 18:20:13.917387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.069 [2024-05-13 18:20:13.919305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.069 [2024-05-13 18:20:13.919460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.069 [2024-05-13 18:20:13.919464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.000 [2024-05-13 18:20:14.745174] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.000 [2024-05-13 18:20:14.773873] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in 
favor of trtype to be removed in v24.09 00:06:59.000 [2024-05-13 18:20:14.774156] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:59.000 18:20:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 
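That block is the heart of the referral test: three discovery referrals are registered over RPC, read back both through the RPC interface and over the wire with nvme discover, then removed again and the empty state is re-verified. Stripped of the test harness (rpc_cmd is the suite's wrapper around scripts/rpc.py, and the --hostnqn/--hostid arguments are left out here for brevity), the round-trip looks like this:

    # register three referrals on the running discovery service
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430

    # read them back through the RPC interface ...
    rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # ... and over the wire, via the discovery service listening on 10.0.0.2:8009
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # remove them again; both views should now come back empty
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430

The trace that follows repeats the same pattern for referrals that name a subsystem explicitly (-n nqn.2016-06.io.spdk:cnode1 and -n discovery) and checks how they show up in the discovery log.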
00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:59.257 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.258 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.258 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.258 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:59.258 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.258 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.258 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.258 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:59.258 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:59.258 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:59.258 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:59.258 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:59.258 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.258 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.515 18:20:15 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:59.515 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:59.773 
18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:59.773 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current 
discovery subsystem").traddr' 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:00.031 rmmod nvme_tcp 00:07:00.031 rmmod nvme_fabrics 00:07:00.031 rmmod nvme_keyring 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:00.031 18:20:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:00.032 18:20:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:00.032 18:20:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 66385 ']' 00:07:00.032 18:20:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 66385 00:07:00.032 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 66385 ']' 00:07:00.032 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 66385 00:07:00.032 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:07:00.289 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:00.289 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66385 00:07:00.289 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:00.289 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:00.289 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66385' 00:07:00.289 killing process with pid 66385 00:07:00.289 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 66385 00:07:00.289 [2024-05-13 18:20:15.995934] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:00.289 18:20:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 66385 00:07:00.547 18:20:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:00.547 18:20:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:00.547 18:20:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:00.547 18:20:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:00.547 18:20:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:00.547 18:20:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.547 18:20:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:07:00.547 18:20:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.547 18:20:16 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:07:00.547 00:07:00.547 real 0m3.216s 00:07:00.547 user 0m10.307s 00:07:00.547 sys 0m0.870s 00:07:00.547 18:20:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:00.547 18:20:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:00.547 ************************************ 00:07:00.547 END TEST nvmf_referrals 00:07:00.547 ************************************ 00:07:00.547 18:20:16 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:00.547 18:20:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:00.548 18:20:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:00.548 18:20:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:00.548 ************************************ 00:07:00.548 START TEST nvmf_connect_disconnect 00:07:00.548 ************************************ 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:00.548 * Looking for test storage... 00:07:00.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.548 
18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.548 18:20:16 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # nvmf_veth_init 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@152 -- 
# NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:07:00.548 Cannot find device "nvmf_tgt_br" 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # true 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:07:00.548 Cannot find device "nvmf_tgt_br2" 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # true 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:07:00.548 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:07:00.806 Cannot find device "nvmf_tgt_br" 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # true 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:07:00.806 Cannot find device "nvmf_tgt_br2" 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # true 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:00.806 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:00.806 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 
00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:00.806 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:07:01.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:01.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:07:01.063 00:07:01.063 --- 10.0.0.2 ping statistics --- 00:07:01.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.063 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:07:01.063 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:01.063 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:07:01.063 00:07:01.063 --- 10.0.0.3 ping statistics --- 00:07:01.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.063 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:01.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:01.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:07:01.063 00:07:01.063 --- 10.0.0.1 ping statistics --- 00:07:01.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.063 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@433 -- # return 0 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=66687 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 66687 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 66687 ']' 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.063 18:20:16 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:01.063 [2024-05-13 18:20:16.866142] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:07:01.063 [2024-05-13 18:20:16.866837] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.320 [2024-05-13 18:20:17.015034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.320 [2024-05-13 18:20:17.137032] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:01.320 [2024-05-13 18:20:17.137093] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.320 [2024-05-13 18:20:17.137107] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.320 [2024-05-13 18:20:17.137118] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.320 [2024-05-13 18:20:17.137130] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.320 [2024-05-13 18:20:17.137287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.320 [2024-05-13 18:20:17.138093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.320 [2024-05-13 18:20:17.138241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.320 [2024-05-13 18:20:17.138252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.253 [2024-05-13 18:20:17.881787] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 
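At this point the trace provisions the target for the connect/disconnect loop: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem exposing that bdev as a namespace, and a listener on 10.0.0.2:4420. In terms of the RPCs visible in the trace (again issued through the rpc_cmd wrapper around scripts/rpc.py), the setup amounts to:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0                    # flags as in the trace
    rpc.py bdev_malloc_create 64 512                                       # returns the bdev name, Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420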
00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:02.253 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:07:02.254 [2024-05-13 18:20:17.952690] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:02.254 [2024-05-13 18:20:17.952970] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.254 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:02.254 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:02.254 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:02.254 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:02.254 18:20:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:07:04.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:06.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:09.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:11.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:13.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:15.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:18.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:20.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:22.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:24.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:26.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:28.877 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.352 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:40.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:42.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:44.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:47.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:49.182 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:51.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:53.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:56.139 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:00.562 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.982 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:08:06.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:16.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:18.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:25.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.313 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.657 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.226 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:53.990 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.916 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.775 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:25.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.183 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:32.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.613 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.427 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.970 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.894 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:46.326 rmmod nvme_tcp 00:10:46.326 rmmod nvme_fabrics 00:10:46.326 rmmod nvme_keyring 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 66687 ']' 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 66687 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 66687 ']' 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 66687 00:10:46.326 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:10:46.327 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux 
']' 00:10:46.327 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 66687 00:10:46.327 killing process with pid 66687 00:10:46.327 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:46.327 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:46.327 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 66687' 00:10:46.327 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 66687 00:10:46.327 [2024-05-13 18:24:01.752817] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:46.327 18:24:01 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 66687 00:10:46.327 18:24:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:46.327 18:24:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:46.327 18:24:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:46.327 18:24:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:46.327 18:24:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:46.327 18:24:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.327 18:24:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.327 18:24:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.327 18:24:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:46.327 00:10:46.327 real 3m45.734s 00:10:46.327 user 14m35.996s 00:10:46.327 sys 0m24.728s 00:10:46.327 18:24:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:46.327 ************************************ 00:10:46.327 END TEST nvmf_connect_disconnect 00:10:46.327 ************************************ 00:10:46.327 18:24:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:46.327 18:24:02 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:46.327 18:24:02 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:46.327 18:24:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:46.327 18:24:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:46.327 ************************************ 00:10:46.327 START TEST nvmf_multitarget 00:10:46.327 ************************************ 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:46.327 * Looking for test storage... 
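For reference, the nvmf_connect_disconnect pass that finishes above reduces to a short target-side RPC setup plus a host-side connect/disconnect loop. The following is a minimal sketch, not the script itself: it assumes rpc.py is invoked against the same /var/tmp/spdk.sock target shown in this run, and it fills in the listener address 10.0.0.2:4420 from the setup output because the loop's exact connect invocation is not visible in this excerpt.

  # target side: transport, bdev, subsystem, namespace, listener (mirrors the rpc_cmd calls logged above)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 64 512                                  # creates Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # host side: 100 iterations, matching num_iterations=100 and NVME_CONNECT='nvme connect -i 8' above
  for i in $(seq 1 100); do
      nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # address/port assumed from the listener above
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1                                # prints the "disconnected 1 controller(s)" lines seen above
  done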
00:10:46.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.327 18:24:02 
nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:46.327 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:46.586 Cannot find device "nvmf_tgt_br" 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@155 -- # true 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:46.586 Cannot find device "nvmf_tgt_br2" 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@156 -- # true 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:46.586 Cannot find device "nvmf_tgt_br" 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@158 -- # true 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:46.586 Cannot find device "nvmf_tgt_br2" 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@159 -- # true 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:10:46.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:46.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:46.586 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:46.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:46.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:10:46.845 00:10:46.845 --- 10.0.0.2 ping statistics --- 00:10:46.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.845 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:46.845 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:46.845 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:10:46.845 00:10:46.845 --- 10.0.0.3 ping statistics --- 00:10:46.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.845 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:46.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:46.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:46.845 00:10:46.845 --- 10.0.0.1 ping statistics --- 00:10:46.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.845 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@433 -- # return 0 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=70460 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 70460 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 70460 ']' 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:46.845 18:24:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:46.845 [2024-05-13 18:24:02.648530] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:10:46.845 [2024-05-13 18:24:02.648644] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:47.103 [2024-05-13 18:24:02.784593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:47.103 [2024-05-13 18:24:02.903763] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:47.103 [2024-05-13 18:24:02.904029] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:47.103 [2024-05-13 18:24:02.904185] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:47.103 [2024-05-13 18:24:02.904239] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:47.103 [2024-05-13 18:24:02.904328] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:47.103 [2024-05-13 18:24:02.904518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:47.103 [2024-05-13 18:24:02.904844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:47.103 [2024-05-13 18:24:02.904930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:47.103 [2024-05-13 18:24:02.904936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:48.035 18:24:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:48.035 18:24:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:10:48.035 18:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:48.035 18:24:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:48.035 18:24:03 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:48.035 18:24:03 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:48.035 18:24:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:48.035 18:24:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:48.035 18:24:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:48.035 18:24:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:48.035 18:24:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:48.291 "nvmf_tgt_1" 00:10:48.291 18:24:03 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:48.291 "nvmf_tgt_2" 00:10:48.291 18:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:48.291 18:24:04 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:48.549 18:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:48.549 18:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:48.549 true 00:10:48.549 18:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:48.549 true 00:10:48.549 18:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:48.549 18:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:48.807 rmmod nvme_tcp 00:10:48.807 rmmod nvme_fabrics 00:10:48.807 rmmod nvme_keyring 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 70460 ']' 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 70460 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 70460 ']' 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 70460 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70460 00:10:48.807 killing process with pid 70460 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70460' 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 70460 00:10:48.807 18:24:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 70460 00:10:49.066 18:24:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:49.066 18:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:49.066 18:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:10:49.066 18:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:49.066 18:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:49.066 18:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.066 18:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:49.066 18:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.325 18:24:05 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:49.325 ************************************ 00:10:49.325 END TEST nvmf_multitarget 00:10:49.325 ************************************ 00:10:49.325 00:10:49.325 real 0m2.918s 00:10:49.325 user 0m9.444s 00:10:49.325 sys 0m0.676s 00:10:49.325 18:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:49.325 18:24:05 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:49.325 18:24:05 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:49.325 18:24:05 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:49.325 18:24:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:49.325 18:24:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:49.325 ************************************ 00:10:49.325 START TEST nvmf_rpc 00:10:49.325 ************************************ 00:10:49.325 18:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:49.325 * Looking for test storage... 
00:10:49.325 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:49.325 18:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:49.325 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:49.325 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.325 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.325 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.325 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:49.326 Cannot find device "nvmf_tgt_br" 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@155 -- # true 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:49.326 Cannot find device "nvmf_tgt_br2" 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@156 -- # true 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:49.326 Cannot find device "nvmf_tgt_br" 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@158 -- # true 00:10:49.326 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:49.584 Cannot find device "nvmf_tgt_br2" 00:10:49.584 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@159 -- # true 00:10:49.584 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:49.584 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:49.584 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:49.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:49.584 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:10:49.584 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:49.584 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:49.584 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:10:49.584 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:49.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:10:49.585 00:10:49.585 --- 10.0.0.2 ping statistics --- 00:10:49.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.585 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:49.585 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:49.585 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:10:49.585 00:10:49.585 --- 10.0.0.3 ping statistics --- 00:10:49.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.585 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:49.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:49.585 00:10:49.585 --- 10.0.0.1 ping statistics --- 00:10:49.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.585 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@433 -- # return 0 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:49.585 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:49.843 18:24:05 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:49.843 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:49.843 18:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:10:49.843 18:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.843 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=70697 00:10:49.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.843 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 70697 00:10:49.843 18:24:05 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.843 18:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 70697 ']' 00:10:49.843 18:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.843 18:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:49.843 18:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.843 18:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:49.843 18:24:05 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.843 [2024-05-13 18:24:05.612675] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:10:49.843 [2024-05-13 18:24:05.612776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.843 [2024-05-13 18:24:05.754717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.101 [2024-05-13 18:24:05.885549] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.101 [2024-05-13 18:24:05.885885] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:50.101 [2024-05-13 18:24:05.886042] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.101 [2024-05-13 18:24:05.886182] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.101 [2024-05-13 18:24:05.886223] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.101 [2024-05-13 18:24:05.886487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.101 [2024-05-13 18:24:05.887745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.101 [2024-05-13 18:24:05.887825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.101 [2024-05-13 18:24:05.887833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.667 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:50.667 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:10:50.667 18:24:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:50.667 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:50.668 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.668 18:24:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.668 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:50.668 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.668 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.668 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.668 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:50.668 "poll_groups": [ 00:10:50.668 { 00:10:50.668 "admin_qpairs": 0, 00:10:50.668 "completed_nvme_io": 0, 00:10:50.668 "current_admin_qpairs": 0, 00:10:50.668 "current_io_qpairs": 0, 00:10:50.668 "io_qpairs": 0, 00:10:50.668 "name": "nvmf_tgt_poll_group_000", 00:10:50.668 "pending_bdev_io": 0, 00:10:50.668 "transports": [] 00:10:50.668 }, 00:10:50.668 { 00:10:50.668 "admin_qpairs": 0, 00:10:50.668 "completed_nvme_io": 0, 00:10:50.668 "current_admin_qpairs": 0, 00:10:50.668 "current_io_qpairs": 0, 00:10:50.668 "io_qpairs": 0, 00:10:50.668 "name": "nvmf_tgt_poll_group_001", 00:10:50.668 "pending_bdev_io": 0, 00:10:50.668 "transports": [] 00:10:50.668 }, 00:10:50.668 { 00:10:50.668 "admin_qpairs": 0, 00:10:50.668 "completed_nvme_io": 0, 00:10:50.668 "current_admin_qpairs": 0, 00:10:50.668 "current_io_qpairs": 0, 00:10:50.668 "io_qpairs": 0, 00:10:50.668 "name": "nvmf_tgt_poll_group_002", 00:10:50.668 "pending_bdev_io": 0, 00:10:50.668 "transports": [] 00:10:50.668 }, 00:10:50.668 { 00:10:50.668 "admin_qpairs": 0, 00:10:50.668 "completed_nvme_io": 0, 00:10:50.668 "current_admin_qpairs": 0, 00:10:50.668 "current_io_qpairs": 0, 00:10:50.668 "io_qpairs": 0, 00:10:50.668 "name": "nvmf_tgt_poll_group_003", 00:10:50.668 "pending_bdev_io": 0, 00:10:50.668 "transports": [] 00:10:50.668 } 00:10:50.668 ], 00:10:50.668 "tick_rate": 2200000000 00:10:50.668 }' 00:10:50.668 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:50.668 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:50.668 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:50.668 18:24:06 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:10:50.926 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:50.926 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:50.926 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:50.926 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:50.926 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.926 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.926 [2024-05-13 18:24:06.698148] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.926 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.926 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:50.926 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.926 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.926 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:50.927 "poll_groups": [ 00:10:50.927 { 00:10:50.927 "admin_qpairs": 0, 00:10:50.927 "completed_nvme_io": 0, 00:10:50.927 "current_admin_qpairs": 0, 00:10:50.927 "current_io_qpairs": 0, 00:10:50.927 "io_qpairs": 0, 00:10:50.927 "name": "nvmf_tgt_poll_group_000", 00:10:50.927 "pending_bdev_io": 0, 00:10:50.927 "transports": [ 00:10:50.927 { 00:10:50.927 "trtype": "TCP" 00:10:50.927 } 00:10:50.927 ] 00:10:50.927 }, 00:10:50.927 { 00:10:50.927 "admin_qpairs": 0, 00:10:50.927 "completed_nvme_io": 0, 00:10:50.927 "current_admin_qpairs": 0, 00:10:50.927 "current_io_qpairs": 0, 00:10:50.927 "io_qpairs": 0, 00:10:50.927 "name": "nvmf_tgt_poll_group_001", 00:10:50.927 "pending_bdev_io": 0, 00:10:50.927 "transports": [ 00:10:50.927 { 00:10:50.927 "trtype": "TCP" 00:10:50.927 } 00:10:50.927 ] 00:10:50.927 }, 00:10:50.927 { 00:10:50.927 "admin_qpairs": 0, 00:10:50.927 "completed_nvme_io": 0, 00:10:50.927 "current_admin_qpairs": 0, 00:10:50.927 "current_io_qpairs": 0, 00:10:50.927 "io_qpairs": 0, 00:10:50.927 "name": "nvmf_tgt_poll_group_002", 00:10:50.927 "pending_bdev_io": 0, 00:10:50.927 "transports": [ 00:10:50.927 { 00:10:50.927 "trtype": "TCP" 00:10:50.927 } 00:10:50.927 ] 00:10:50.927 }, 00:10:50.927 { 00:10:50.927 "admin_qpairs": 0, 00:10:50.927 "completed_nvme_io": 0, 00:10:50.927 "current_admin_qpairs": 0, 00:10:50.927 "current_io_qpairs": 0, 00:10:50.927 "io_qpairs": 0, 00:10:50.927 "name": "nvmf_tgt_poll_group_003", 00:10:50.927 "pending_bdev_io": 0, 00:10:50.927 "transports": [ 00:10:50.927 { 00:10:50.927 "trtype": "TCP" 00:10:50.927 } 00:10:50.927 ] 00:10:50.927 } 00:10:50.927 ], 00:10:50.927 "tick_rate": 2200000000 00:10:50.927 }' 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
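With the TCP transport created (nvmf_create_transport -t tcp -o -u 8192), the script re-reads nvmf_get_stats and validates it with the jcount/jsum helpers defined in target/rpc.sh. A sketch of those two checks, assuming rpc.py is used against the default /var/tmp/spdk.sock socket in place of the rpc_cmd wrapper:

    stats=$(scripts/rpc.py nvmf_get_stats)

    # jcount '.poll_groups[].name': one poll group per enabled core, so 4 for -m 0xF.
    echo "$stats" | jq '.poll_groups[].name' | wc -l

    # jsum: total admin/io qpairs across all poll groups; both sums are still 0
    # here because no host has connected yet.
    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
    echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'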
00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.927 Malloc1 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:50.927 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.185 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.185 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.185 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.185 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.185 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.186 [2024-05-13 18:24:06.880567] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:51.186 [2024-05-13 18:24:06.881018] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 -a 10.0.0.2 -s 4420 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 
--hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 -a 10.0.0.2 -s 4420 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 -a 10.0.0.2 -s 4420 00:10:51.186 [2024-05-13 18:24:06.903059] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61' 00:10:51.186 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:51.186 could not add new controller: failed to write to nvme-fabrics device 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:51.186 18:24:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:51.186 18:24:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:51.186 18:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:10:51.186 18:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:51.186 18:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:51.186 18:24:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:53.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.716 [2024-05-13 18:24:09.194184] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 
'nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61' 00:10:53.716 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:53.716 could not add new controller: failed to write to nvme-fabrics device 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:53.716 18:24:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.665 
18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.665 [2024-05-13 18:24:11.480047] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.665 18:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.924 18:24:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:55.924 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:10:55.924 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.924 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:55.924 18:24:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:10:57.841 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:10:57.841 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:10:57.841 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.841 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:10:57.841 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.841 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:10:57.841 18:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:58.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.144 [2024-05-13 18:24:13.895000] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:58.144 18:24:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 
--hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:58.144 18:24:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:58.144 18:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:10:58.144 18:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:10:58.144 18:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:10:58.144 18:24:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.677 
18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.677 [2024-05-13 18:24:16.298343] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.677 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.678 18:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:00.678 18:24:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.678 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:11:00.678 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.678 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:00.678 18:24:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:02.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.645 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.903 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.903 18:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:02.903 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.903 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.903 [2024-05-13 18:24:18.593514] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:02.903 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.903 18:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:02.903 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.903 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.903 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.903 18:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:02.903 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:02.903 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.903 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:02.904 18:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.904 18:24:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:02.904 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:11:02.904 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:02.904 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:02.904 18:24:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.434 [2024-05-13 18:24:20.891143] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.434 18:24:20 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:05.434 18:24:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.434 18:24:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.434 18:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:11:05.434 18:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.435 18:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:11:05.435 18:24:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.334 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # 
for i in $(seq 1 $loops) 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.593 [2024-05-13 18:24:23.300112] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.593 [2024-05-13 18:24:23.348116] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.593 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 [2024-05-13 18:24:23.396201] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
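The passes above exercise target/rpc.sh lines 99-107: the same subsystem lifecycle is created and torn down five times without ever connecting a host. Condensed into one loop, with rpc.py standing in for the rpc_cmd wrapper, each pass is roughly:

    for i in $(seq 1 5); do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done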
00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 [2024-05-13 18:24:23.444264] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:07.594 18:24:23 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 [2024-05-13 18:24:23.492298] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.594 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:07.853 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.853 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:07.853 "poll_groups": [ 00:11:07.853 { 00:11:07.853 "admin_qpairs": 2, 00:11:07.853 "completed_nvme_io": 68, 00:11:07.853 "current_admin_qpairs": 0, 00:11:07.853 "current_io_qpairs": 0, 00:11:07.853 "io_qpairs": 16, 00:11:07.853 "name": "nvmf_tgt_poll_group_000", 00:11:07.853 "pending_bdev_io": 0, 00:11:07.853 "transports": [ 00:11:07.853 { 00:11:07.853 "trtype": "TCP" 00:11:07.853 } 00:11:07.853 ] 00:11:07.853 }, 00:11:07.853 { 00:11:07.853 "admin_qpairs": 3, 00:11:07.853 "completed_nvme_io": 69, 00:11:07.853 "current_admin_qpairs": 0, 00:11:07.853 "current_io_qpairs": 
0, 00:11:07.853 "io_qpairs": 17, 00:11:07.853 "name": "nvmf_tgt_poll_group_001", 00:11:07.853 "pending_bdev_io": 0, 00:11:07.853 "transports": [ 00:11:07.853 { 00:11:07.853 "trtype": "TCP" 00:11:07.853 } 00:11:07.853 ] 00:11:07.853 }, 00:11:07.853 { 00:11:07.853 "admin_qpairs": 1, 00:11:07.853 "completed_nvme_io": 136, 00:11:07.853 "current_admin_qpairs": 0, 00:11:07.853 "current_io_qpairs": 0, 00:11:07.853 "io_qpairs": 19, 00:11:07.853 "name": "nvmf_tgt_poll_group_002", 00:11:07.853 "pending_bdev_io": 0, 00:11:07.853 "transports": [ 00:11:07.853 { 00:11:07.853 "trtype": "TCP" 00:11:07.853 } 00:11:07.853 ] 00:11:07.853 }, 00:11:07.853 { 00:11:07.853 "admin_qpairs": 1, 00:11:07.853 "completed_nvme_io": 147, 00:11:07.853 "current_admin_qpairs": 0, 00:11:07.853 "current_io_qpairs": 0, 00:11:07.853 "io_qpairs": 18, 00:11:07.853 "name": "nvmf_tgt_poll_group_003", 00:11:07.853 "pending_bdev_io": 0, 00:11:07.853 "transports": [ 00:11:07.853 { 00:11:07.853 "trtype": "TCP" 00:11:07.853 } 00:11:07.853 ] 00:11:07.853 } 00:11:07.853 ], 00:11:07.853 "tick_rate": 2200000000 00:11:07.854 }' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:07.854 rmmod nvme_tcp 00:11:07.854 rmmod nvme_fabrics 00:11:07.854 rmmod nvme_keyring 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 70697 ']' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 70697 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 70697 ']' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 70697 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # 
uname 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 70697 00:11:07.854 killing process with pid 70697 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 70697' 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 70697 00:11:07.854 [2024-05-13 18:24:23.771994] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:07.854 18:24:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 70697 00:11:08.419 18:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:08.419 18:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:08.419 18:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:08.419 18:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.419 18:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.419 18:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.419 18:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.419 18:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.419 18:24:24 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:08.419 00:11:08.419 real 0m19.134s 00:11:08.419 user 1m11.363s 00:11:08.419 sys 0m2.534s 00:11:08.419 18:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:08.419 18:24:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.419 ************************************ 00:11:08.419 END TEST nvmf_rpc 00:11:08.419 ************************************ 00:11:08.420 18:24:24 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:08.420 18:24:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:08.420 18:24:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:08.420 18:24:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:08.420 ************************************ 00:11:08.420 START TEST nvmf_invalid 00:11:08.420 ************************************ 00:11:08.420 18:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:08.685 * Looking for test storage... 
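The qpair totals checked at target/rpc.sh@112-113 above come from the small jsum helper traced at rpc.sh@19-20: it runs a jq filter over the captured nvmf_get_stats JSON and sums the matching values with awk. A minimal sketch, assuming the helper reads the $stats variable captured at rpc.sh@110 (the real function in target/rpc.sh may wire the pipeline differently):

    # sum one numeric field across all poll groups in the nvmf_get_stats output
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    # usage, mirroring rpc.sh@112-113 above
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+3+1+1 = 7
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 16+17+19+18 = 70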
00:11:08.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.685 
18:24:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.685 18:24:24 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:08.685 Cannot find device "nvmf_tgt_br" 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@155 -- # true 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:08.685 Cannot find device "nvmf_tgt_br2" 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@156 -- # true 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:08.685 Cannot find device "nvmf_tgt_br" 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@158 -- # true 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:08.685 Cannot find device "nvmf_tgt_br2" 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@159 -- # true 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:08.685 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:08.685 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:08.685 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:08.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:11:08.961 00:11:08.961 --- 10.0.0.2 ping statistics --- 00:11:08.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.961 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:08.961 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:11:08.961 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:11:08.961 00:11:08.961 --- 10.0.0.3 ping statistics --- 00:11:08.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.961 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:08.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:11:08.961 00:11:08.961 --- 10.0.0.1 ping statistics --- 00:11:08.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.961 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@433 -- # return 0 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=71211 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 71211 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 71211 ']' 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:08.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:08.961 18:24:24 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:08.961 [2024-05-13 18:24:24.839679] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
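The 10.0.0.x addresses that the three pings above verify are created by nvmf_veth_init a few lines earlier (nvmf/common.sh@166-207): a network namespace for the target, veth pairs, and a bridge tying the host-side ends together. Condensed into a sketch, with device and namespace names taken verbatim from the trace (the real helper also brings up the second target interface and flushes stale devices first):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # host (initiator) side -> target namespace sanity check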
00:11:08.961 [2024-05-13 18:24:24.839776] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.220 [2024-05-13 18:24:24.977855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.220 [2024-05-13 18:24:25.137065] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.220 [2024-05-13 18:24:25.137451] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.220 [2024-05-13 18:24:25.137741] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.220 [2024-05-13 18:24:25.137964] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.220 [2024-05-13 18:24:25.138100] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.220 [2024-05-13 18:24:25.138370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.220 [2024-05-13 18:24:25.138594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.220 [2024-05-13 18:24:25.138604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.220 [2024-05-13 18:24:25.138470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.156 18:24:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:10.156 18:24:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:11:10.156 18:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:10.156 18:24:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.156 18:24:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:10.156 18:24:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.156 18:24:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:10.156 18:24:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12924 00:11:10.415 [2024-05-13 18:24:26.218717] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:10.416 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/05/13 18:24:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12924 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:10.416 request: 00:11:10.416 { 00:11:10.416 "method": "nvmf_create_subsystem", 00:11:10.416 "params": { 00:11:10.416 "nqn": "nqn.2016-06.io.spdk:cnode12924", 00:11:10.416 "tgt_name": "foobar" 00:11:10.416 } 00:11:10.416 } 00:11:10.416 Got JSON-RPC error response 00:11:10.416 GoRPCClient: error on JSON-RPC call' 00:11:10.416 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/05/13 18:24:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12924 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:11:10.416 request: 00:11:10.416 { 
00:11:10.416 "method": "nvmf_create_subsystem", 00:11:10.416 "params": { 00:11:10.416 "nqn": "nqn.2016-06.io.spdk:cnode12924", 00:11:10.416 "tgt_name": "foobar" 00:11:10.416 } 00:11:10.416 } 00:11:10.416 Got JSON-RPC error response 00:11:10.416 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:10.416 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:10.416 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode24664 00:11:10.674 [2024-05-13 18:24:26.531382] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24664: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:10.674 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/05/13 18:24:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode24664 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:10.674 request: 00:11:10.674 { 00:11:10.674 "method": "nvmf_create_subsystem", 00:11:10.674 "params": { 00:11:10.674 "nqn": "nqn.2016-06.io.spdk:cnode24664", 00:11:10.674 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:10.674 } 00:11:10.674 } 00:11:10.674 Got JSON-RPC error response 00:11:10.674 GoRPCClient: error on JSON-RPC call' 00:11:10.674 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/05/13 18:24:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode24664 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:11:10.674 request: 00:11:10.674 { 00:11:10.674 "method": "nvmf_create_subsystem", 00:11:10.674 "params": { 00:11:10.674 "nqn": "nqn.2016-06.io.spdk:cnode24664", 00:11:10.674 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:11:10.674 } 00:11:10.674 } 00:11:10.674 Got JSON-RPC error response 00:11:10.674 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:10.674 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:10.674 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16225 00:11:10.933 [2024-05-13 18:24:26.815765] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16225: invalid model number 'SPDK_Controller' 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/05/13 18:24:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode16225], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:10.933 request: 00:11:10.933 { 00:11:10.933 "method": "nvmf_create_subsystem", 00:11:10.933 "params": { 00:11:10.933 "nqn": "nqn.2016-06.io.spdk:cnode16225", 00:11:10.933 "model_number": "SPDK_Controller\u001f" 00:11:10.933 } 00:11:10.933 } 00:11:10.933 Got JSON-RPC error response 00:11:10.933 GoRPCClient: error on JSON-RPC call' 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/05/13 18:24:26 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller 
nqn:nqn.2016-06.io.spdk:cnode16225], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:11:10.933 request: 00:11:10.933 { 00:11:10.933 "method": "nvmf_create_subsystem", 00:11:10.933 "params": { 00:11:10.933 "nqn": "nqn.2016-06.io.spdk:cnode16225", 00:11:10.933 "model_number": "SPDK_Controller\u001f" 00:11:10.933 } 00:11:10.933 } 00:11:10.933 Got JSON-RPC error response 00:11:10.933 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:10.933 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.934 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.934 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:10.934 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:10.934 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:10.934 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.934 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.934 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:10.934 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:10.934 18:24:26 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:10.934 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:10.934 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:10.934 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:10.934 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:11.193 18:24:26 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:11.193 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:11.194 18:24:26 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ' == \- ]] 00:11:11.194 18:24:26 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ''\'']*/1Uvyj /dev/null' 00:11:14.890 18:24:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.890 18:24:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:14.890 00:11:14.890 real 0m6.377s 00:11:14.890 user 0m25.161s 00:11:14.890 sys 0m1.406s 00:11:14.890 18:24:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:14.890 18:24:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:14.890 ************************************ 00:11:14.890 END TEST nvmf_invalid 00:11:14.890 ************************************ 00:11:14.890 18:24:30 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:14.890 18:24:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:14.890 18:24:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:14.890 18:24:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:14.890 ************************************ 00:11:14.890 START TEST nvmf_abort 00:11:14.890 ************************************ 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:14.890 * Looking for test storage... 00:11:14.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source 
/home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
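Both nvmf_invalid above and nvmf_abort here are launched through the run_test wrapper (common/autotest_common.sh@1121), which prints the START TEST / END TEST banners and the real/user/sys timing seen at the end of each suite. A rough sketch covering only the behaviour visible in this log (the actual helper also validates its arguments and toggles xtrace):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # e.g. .../test/nvmf/target/abort.sh --transport=tcp
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }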
00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:14.890 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:15.149 Cannot find device "nvmf_tgt_br" 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@155 -- # true 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:15.149 Cannot find device "nvmf_tgt_br2" 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@156 -- # true 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:15.149 18:24:30 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:15.149 Cannot find device "nvmf_tgt_br" 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@158 -- # true 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:15.149 Cannot find device "nvmf_tgt_br2" 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@159 -- # true 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:15.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@162 -- # true 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:15.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@163 -- # true 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:15.149 18:24:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:15.149 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:15.149 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:15.149 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:15.149 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:15.149 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:15.149 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:15.149 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:15.149 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:15.149 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:15.149 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:15.149 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:15.149 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:15.149 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:15.149 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@198 -- # ip link set 
nvmf_tgt_br2 master nvmf_br 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:15.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:11:15.408 00:11:15.408 --- 10.0.0.2 ping statistics --- 00:11:15.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.408 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:15.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:15.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:11:15.408 00:11:15.408 --- 10.0.0.3 ping statistics --- 00:11:15.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.408 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:15.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:11:15.408 00:11:15.408 --- 10.0.0.1 ping statistics --- 00:11:15.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.408 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@433 -- # return 0 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=71721 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 71721 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 71721 ']' 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:15.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
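waitforlisten is the last step of nvmfappstart: the target binary is started inside the test namespace and the helper polls the RPC socket until it answers, so the rpc_cmd calls that follow have something to talk to. A sketch of that pattern with the binary path and flags copied from the trace above (nvmf/common.sh@480-482); the real helper adds timing and error traps:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock accepts JSON-RPC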
00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:15.408 18:24:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:15.408 [2024-05-13 18:24:31.214180] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:11:15.408 [2024-05-13 18:24:31.214280] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.666 [2024-05-13 18:24:31.353606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:15.666 [2024-05-13 18:24:31.470375] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.666 [2024-05-13 18:24:31.470439] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.666 [2024-05-13 18:24:31.470450] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.666 [2024-05-13 18:24:31.470458] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.666 [2024-05-13 18:24:31.470464] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.666 [2024-05-13 18:24:31.470631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.666 [2024-05-13 18:24:31.471123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.666 [2024-05-13 18:24:31.471137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.277 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:16.277 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:11:16.277 18:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:16.277 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:16.277 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:16.277 18:24:32 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.277 18:24:32 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:16.277 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.277 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:16.277 [2024-05-13 18:24:32.193377] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.277 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.277 18:24:32 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:16.277 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.277 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:16.535 Malloc0 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:16.535 18:24:32 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:16.535 Delay0 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:16.535 [2024-05-13 18:24:32.265287] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:16.535 [2024-05-13 18:24:32.265565] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.535 18:24:32 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:16.535 [2024-05-13 18:24:32.451666] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:19.063 Initializing NVMe Controllers 00:11:19.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:19.063 controller IO queue size 128 less than required 00:11:19.063 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:19.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:19.063 Initialization complete. Launching workers. 
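Condensed from the rpc_cmd trace above, the abort test provisions a delay-wrapped malloc namespace and then points the abort example at it. The same sequence expressed as plain rpc.py calls (the script itself goes through its rpc_cmd wrapper, but the arguments are identical to what is traced):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0
    # Wrap Malloc0 in a delay bdev so that I/O stays queued long enough for
    # the abort requests to have something to act on.
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # One second of queue-depth-128 reads with aborts submitted against them.
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128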
00:11:19.063 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32804 00:11:19.063 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32865, failed to submit 62 00:11:19.063 success 32808, unsuccess 57, failed 0 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:19.063 rmmod nvme_tcp 00:11:19.063 rmmod nvme_fabrics 00:11:19.063 rmmod nvme_keyring 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 71721 ']' 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 71721 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 71721 ']' 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 71721 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71721 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71721' 00:11:19.063 killing process with pid 71721 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 71721 00:11:19.063 [2024-05-13 18:24:34.617116] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 71721 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:19.063 18:24:34 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:19.063 00:11:19.063 real 0m4.229s 00:11:19.063 user 0m12.077s 00:11:19.063 sys 0m1.011s 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:19.063 18:24:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:11:19.063 ************************************ 00:11:19.063 END TEST nvmf_abort 00:11:19.063 ************************************ 00:11:19.063 18:24:34 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:19.063 18:24:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:19.063 18:24:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:19.063 18:24:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:19.063 ************************************ 00:11:19.063 START TEST nvmf_ns_hotplug_stress 00:11:19.063 ************************************ 00:11:19.063 18:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:19.383 * Looking for test storage... 00:11:19.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:19.383 
18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:19.383 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:19.384 Cannot find device "nvmf_tgt_br" 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # true 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:19.384 Cannot find device "nvmf_tgt_br2" 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # true 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:19.384 Cannot find device "nvmf_tgt_br" 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # true 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:19.384 Cannot find device "nvmf_tgt_br2" 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # true 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:19.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:19.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:19.384 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:19.643 18:24:35 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:19.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:19.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:11:19.643 00:11:19.643 --- 10.0.0.2 ping statistics --- 00:11:19.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.643 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:19.643 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:19.643 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:11:19.643 00:11:19.643 --- 10.0.0.3 ping statistics --- 00:11:19.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.643 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:19.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:19.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:11:19.643 00:11:19.643 --- 10.0.0.1 ping statistics --- 00:11:19.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:19.643 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@433 -- # return 0 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=71976 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 71976 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 71976 ']' 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:19.643 18:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.643 [2024-05-13 18:24:35.526047] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:11:19.643 [2024-05-13 18:24:35.526144] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.902 [2024-05-13 18:24:35.663626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:19.902 [2024-05-13 18:24:35.781097] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:19.902 [2024-05-13 18:24:35.781158] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:19.902 [2024-05-13 18:24:35.781170] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.902 [2024-05-13 18:24:35.781178] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.902 [2024-05-13 18:24:35.781185] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.902 [2024-05-13 18:24:35.781345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.902 [2024-05-13 18:24:35.781847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.902 [2024-05-13 18:24:35.781857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.836 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:20.836 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:11:20.836 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:20.836 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.836 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.836 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.836 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:20.836 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:21.094 [2024-05-13 18:24:36.824978] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.094 18:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:21.352 18:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.610 [2024-05-13 18:24:37.315152] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:21.610 [2024-05-13 18:24:37.315451] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.610 18:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:21.868 18:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:22.126 Malloc0 00:11:22.126 18:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:22.384 Delay0 00:11:22.384 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.642 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:22.902 NULL1 00:11:22.902 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:23.159 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=72111 00:11:23.159 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:23.159 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:23.159 18:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.535 Read completed with error (sct=0, sc=11) 00:11:24.535 18:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:24.535 18:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:24.535 18:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:24.793 true 00:11:24.793 18:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:24.793 18:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.729 18:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.729 18:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:25.729 18:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:25.986 true 00:11:25.986 18:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:25.986 18:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.245 18:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.503 18:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:26.503 18:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:26.761 true 00:11:26.761 18:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:26.761 18:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.696 18:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.954 18:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:27.954 18:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:27.954 true 00:11:28.211 18:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:28.211 18:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:28.211 18:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.475 18:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:28.475 18:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:28.766 true 00:11:28.766 18:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:28.766 18:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:29.700 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.958 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:29.958 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:30.216 true 00:11:30.216 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:30.216 18:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.474 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.733 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:30.733 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:30.991 true 00:11:30.991 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:30.991 18:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.249 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.508 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:31.508 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:31.766 true 00:11:31.766 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:31.766 18:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.702 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.702 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:32.961 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:32.961 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:32.961 true 00:11:32.961 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:32.961 18:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.219 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.786 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:33.786 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:33.786 true 00:11:33.786 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:33.786 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.045 18:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.303 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:34.303 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:34.561 true 00:11:34.561 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:34.561 18:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.498 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.088 
18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:36.088 18:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:36.088 true 00:11:36.088 18:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:36.088 18:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.359 18:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.617 18:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:36.617 18:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:36.875 true 00:11:36.875 18:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:36.875 18:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.134 18:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.392 18:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:37.392 18:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:37.650 true 00:11:37.650 18:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:37.650 18:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.585 18:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:38.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:38.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:38.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:38.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:38.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:38.843 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:38.843 18:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:11:38.843 18:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:39.100 true 00:11:39.100 18:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:39.100 18:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.035 18:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:40.293 18:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:40.293 18:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:40.550 true 00:11:40.550 18:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:40.550 18:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.806 18:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.063 18:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:41.063 18:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:41.063 true 00:11:41.320 18:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:41.320 18:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.320 18:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:41.578 18:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:41.578 18:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:41.835 true 00:11:41.835 18:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:41.835 18:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:43.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.209 18:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:43.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.209 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:43.209 18:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:43.209 18:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:43.467 true 00:11:43.467 18:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:43.467 18:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.401 18:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:44.401 18:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:44.401 18:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:44.659 true 00:11:44.659 18:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:44.659 18:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.917 18:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:45.175 18:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:45.175 18:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:45.433 true 00:11:45.433 18:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:45.433 18:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.368 18:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:46.368 18:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:46.368 18:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:46.626 true 00:11:46.626 18:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:46.626 18:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:46.884 18:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.142 18:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:47.142 18:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:47.401 true 00:11:47.401 18:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:47.401 18:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:47.999 18:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:47.999 18:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:47.999 18:25:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:48.257 true 00:11:48.257 18:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:48.257 18:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.192 18:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:49.450 18:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:49.450 18:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:49.708 true 00:11:49.708 18:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:49.708 18:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.967 18:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:50.225 18:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:50.225 18:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:50.483 true 00:11:50.483 18:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:50.483 18:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:50.741 18:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:50.998 18:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:50.998 18:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:51.256 true 00:11:51.256 18:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:51.256 18:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:52.232 18:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:52.490 18:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:52.490 18:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:52.747 true 00:11:52.747 18:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:52.747 18:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.005 18:25:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:53.263 18:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:53.263 18:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:53.263 Initializing NVMe Controllers 00:11:53.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:53.263 Controller IO queue size 128, less than required. 00:11:53.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:53.263 Controller IO queue size 128, less than required. 00:11:53.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:53.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:53.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:53.263 Initialization complete. Launching workers. 00:11:53.263 ======================================================== 00:11:53.263 Latency(us) 00:11:53.263 Device Information : IOPS MiB/s Average min max 00:11:53.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 790.77 0.39 78810.90 3387.89 1155533.41 00:11:53.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9966.22 4.87 12844.56 3308.11 556634.59 00:11:53.263 ======================================================== 00:11:53.263 Total : 10757.00 5.25 17693.91 3308.11 1155533.41 00:11:53.263 00:11:53.520 true 00:11:53.520 18:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 72111 00:11:53.520 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (72111) - No such process 00:11:53.520 18:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 72111 00:11:53.520 18:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:53.777 18:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:54.035 18:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:54.035 18:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:54.035 18:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:54.035 18:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.035 18:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:54.292 null0 00:11:54.292 18:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:54.292 18:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.292 18:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create 
null1 100 4096 00:11:54.550 null1 00:11:54.550 18:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:54.550 18:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.550 18:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:54.807 null2 00:11:54.807 18:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:54.807 18:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:54.807 18:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:55.064 null3 00:11:55.064 18:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:55.064 18:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:55.064 18:25:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:55.321 null4 00:11:55.321 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:55.321 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:55.321 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:55.578 null5 00:11:55.578 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:55.578 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:55.578 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:55.836 null6 00:11:55.836 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:55.836 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:55.836 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:56.094 null7 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
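The per-namespace I/O statistics above close out the first phase of this test: a background I/O job (pid 72111 in this run) kept issuing requests to nqn.2016-06.io.spdk:cnode1 while the script repeatedly detached namespace 1, re-attached the Delay0 bdev, and grew NULL1 by one unit per pass (null_size 1024 through 1029 in the trace). A minimal sketch of that loop, reconstructed from the xtrace markers at script lines 44-55, follows; the while-loop structure and the perf_pid/rpc_py names are assumptions, only the individual rpc.py calls appear verbatim in the log.

  # Sketch only -- pieced together from the xtrace, not copied from ns_hotplug_stress.sh.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as printed in the trace
  perf_pid=72111                                       # hypothetical variable name; the log only shows the pid value
  null_size=1024

  # Hot-plug namespace 1 and grow NULL1 for as long as the I/O job stays alive.
  while kill -0 "$perf_pid"; do
      "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      ((++null_size))
      "$rpc_py" bdev_null_resize NULL1 "$null_size"
  done

  # Once kill -0 reports "No such process" (as it does above), reap the job and
  # drop both namespaces before the parallel phase starts.
  wait "$perf_pid"
  "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2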
00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
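Each backgrounded worker recorded in pids above runs the add_remove helper, and its iterations are what produce the interleaved add/remove trace in the rest of this section. Pieced together from the @14-@18 xtrace markers, the helper looks roughly like this; taking nsid and bdev from the two positional arguments is an assumption (the trace only shows their expanded values, e.g. "local nsid=1 bdev=null0").

  # Sketch of add_remove as implied by ns_hotplug_stress.sh lines 14-18 in the trace.
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  add_remove() {
      local nsid=$1 bdev=$2

      # Ten attach/detach cycles against the same subsystem and namespace ID.
      for ((i = 0; i < 10; i++)); do
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }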
00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
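The driver that fans those workers out is also visible in the trace (script lines 58-66): one null bdev per worker, one backgrounded add_remove call per bdev, and finally a wait on all recorded PIDs ("wait 73146 73148 ..." a little further down). A hedged reconstruction, reusing the add_remove sketch above; the loop syntax is assumed, the commands themselves are taken from the log.

  # Fan-out phase of ns_hotplug_stress.sh as implied by the xtrace (lines 58-66).
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nthreads=8
  pids=()

  # One null bdev per worker, created exactly as traced: bdev_null_create null<i> 100 4096.
  for ((i = 0; i < nthreads; i++)); do
      "$rpc_py" bdev_null_create "null$i" 100 4096
  done

  # Background one add_remove worker per bdev, each owning its own namespace ID.
  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &
      pids+=($!)
  done

  wait "${pids[@]}"   # shows up as "wait 73146 73148 73149 ..." in the trace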
00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:56.094 18:25:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 73146 73148 73149 73151 73153 73155 73157 73158 00:11:56.353 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.353 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:56.353 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:56.353 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:56.353 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.353 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:56.353 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.612 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:56.871 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:56.871 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:56.871 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:56.871 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:56.871 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:11:56.871 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:56.871 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:56.871 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:56.871 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:56.871 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:57.128 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:57.128 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.128 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.128 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:57.128 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.128 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.128 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.128 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.128 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:57.128 18:25:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:57.128 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.128 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.128 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:57.128 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.128 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.128 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:57.128 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.128 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.128 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:57.388 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.388 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.388 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:57.388 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.388 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.388 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.388 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:57.388 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.388 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:57.388 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:57.388 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.647 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:57.905 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.905 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.905 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:57.905 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.905 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.905 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.905 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:57.905 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:57.905 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:57.905 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:57.905 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:57.905 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:57.905 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:58.163 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:58.163 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:58.163 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:58.163 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.163 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.163 18:25:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:58.163 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:58.163 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.163 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.163 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:58.163 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.163 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.163 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:58.163 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.163 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.163 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:58.421 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:58.679 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:58.679 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:58.679 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:58.679 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:58.679 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.679 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.679 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:58.679 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:58.679 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.679 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.679 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:58.679 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.679 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.679 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.937 18:25:14 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:58.937 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:59.195 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:59.195 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:59.195 18:25:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:59.195 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.195 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.195 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.195 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:59.195 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:59.195 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.195 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.195 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:59.452 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.452 
18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.453 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:59.712 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:59.712 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:59.712 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:59.712 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:59.712 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:59.712 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:11:59.712 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.712 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:59.712 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:59.971 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:00.230 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:00.230 18:25:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:00.230 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.230 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.230 18:25:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:00.230 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:00.230 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:00.230 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.230 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.230 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:00.230 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:00.230 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.230 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.230 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:00.494 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:00.756 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.756 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.756 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:00.756 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:00.756 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.756 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.756 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:00.756 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:00.756 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:00.756 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:00.756 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:00.756 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:00.756 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.014 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:01.274 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.274 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.274 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:01.274 18:25:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:01.274 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:01.274 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:01.274 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.274 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.274 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:01.274 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:01.274 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:01.533 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:01.533 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.533 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.533 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.533 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.533 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.533 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.533 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:01.533 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:01.533 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.533 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.533 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.533 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:01.792 rmmod nvme_tcp 00:12:01.792 rmmod nvme_fabrics 00:12:01.792 rmmod nvme_keyring 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 71976 ']' 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 71976 00:12:01.792 18:25:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 71976 ']' 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 71976 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 71976 00:12:01.792 killing process with pid 71976 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 71976' 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 71976 00:12:01.792 18:25:17 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 71976 00:12:01.792 [2024-05-13 18:25:17.679665] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:02.377 18:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:02.377 18:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:02.377 18:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:02.377 18:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:02.377 18:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:02.377 18:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.377 18:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:02.377 18:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.377 18:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:02.377 00:12:02.377 real 0m43.117s 00:12:02.377 user 3m27.316s 00:12:02.377 sys 0m12.860s 00:12:02.377 18:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:02.377 ************************************ 00:12:02.377 END TEST nvmf_ns_hotplug_stress 00:12:02.377 18:25:18 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.377 ************************************ 00:12:02.377 18:25:18 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:02.377 18:25:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:02.377 18:25:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:02.377 18:25:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:02.377 ************************************ 00:12:02.377 START TEST nvmf_connect_stress 00:12:02.377 ************************************ 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:02.377 * Looking 
for test storage... 00:12:02.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:02.377 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:02.378 Cannot find device "nvmf_tgt_br" 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@155 -- # true 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:02.378 Cannot find device "nvmf_tgt_br2" 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@156 -- # true 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:02.378 Cannot find device "nvmf_tgt_br" 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@158 -- # true 00:12:02.378 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:02.637 Cannot find device "nvmf_tgt_br2" 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@159 -- # true 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:12:02.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:02.637 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:02.637 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:02.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:02.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:12:02.896 00:12:02.896 --- 10.0.0.2 ping statistics --- 00:12:02.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.896 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:02.896 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:02.896 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:12:02.896 00:12:02.896 --- 10.0.0.3 ping statistics --- 00:12:02.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.896 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:02.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:02.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:02.896 00:12:02.896 --- 10.0.0.1 ping statistics --- 00:12:02.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:02.896 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@433 -- # return 0 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=74461 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 74461 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 74461 ']' 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
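The block above is the network bring-up phase of the connect_stress run: nvmf_veth_init builds a veth/bridge topology with the SPDK target isolated in the nvmf_tgt_ns_spdk namespace, verifies it with single pings, and nvmfappstart then launches nvmf_tgt inside that namespace and waits for its RPC socket. For readability, here is the same sequence condensed into a plain shell sketch. Every command is taken from the trace; the grouping, the for loop, and the backgrounding/$! plumbing (reconstructed from the expanded PID the trace prints) are editorial.

    # Isolated veth topology: initiator stays in the default netns,
    # the SPDK target runs inside nvmf_tgt_ns_spdk.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Address plan: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target-side interfaces.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peers together.
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Open the NVMe/TCP port, allow bridged forwarding, then verify reachability.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

    # Start the target inside the namespace and wait for its RPC socket (/var/tmp/spdk.sock).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten "$nvmfpid"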
00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:02.896 18:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.896 [2024-05-13 18:25:18.681672] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:12:02.896 [2024-05-13 18:25:18.681779] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.896 [2024-05-13 18:25:18.823416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:03.155 [2024-05-13 18:25:18.952447] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.155 [2024-05-13 18:25:18.952514] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.155 [2024-05-13 18:25:18.952528] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.155 [2024-05-13 18:25:18.952539] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.155 [2024-05-13 18:25:18.952549] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.155 [2024-05-13 18:25:18.952735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.155 [2024-05-13 18:25:18.953252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.155 [2024-05-13 18:25:18.953265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.090 [2024-05-13 18:25:19.815937] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.090 
18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.090 [2024-05-13 18:25:19.835847] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:04.090 [2024-05-13 18:25:19.836246] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.090 NULL1 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.090 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=74515 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
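By this point connect_stress.sh has configured the target over JSON-RPC and started the stressor whose PID (74515) is polled further down. The same setup, condensed: transport options, NQN, serial number, null-bdev arguments and the connect string are copied from the trace; the & / $! plumbing is inferred from the expanded PID, and the payload written by the seq/cat loop is not echoed in the xtrace output, so it is left elided.

    # Target-side configuration for the stress test.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512      # backing null bdev (arguments as logged)

    # Launch the connect/disconnect stressor against that subsystem for 10 seconds.
    /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress \
        -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -t 10 &
    PERF_PID=$!

    # Scratch file of queued RPCs to replay while the stressor runs.
    rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt
    rm -f "$rpcs"
    # for i in $(seq 1 20); do cat ...; done
    #   -> builds the RPC batch; the heredoc body and its redirection are not visible in the trace.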
00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.091 18:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.350 18:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.350 18:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:04.350 18:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.350 18:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:04.350 18:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:04.916 18:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.916 18:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:04.916 18:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:04.916 18:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:12:04.916 18:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.174 18:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.174 18:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:05.174 18:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.174 18:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.174 18:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.433 18:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.433 18:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:05.433 18:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.433 18:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.433 18:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.691 18:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.691 18:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:05.691 18:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.691 18:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.691 18:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:05.949 18:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:05.949 18:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:05.949 18:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:05.949 18:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:05.949 18:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.532 18:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.532 18:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:06.532 18:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.532 18:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.532 18:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.790 18:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.790 18:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:06.790 18:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.790 18:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.790 18:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.049 18:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.049 18:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:07.049 18:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.049 18:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.049 18:25:22 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:07.307 18:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.307 18:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:07.307 18:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.307 18:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.307 18:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.564 18:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:07.564 18:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:07.564 18:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.564 18:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:07.564 18:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.127 18:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.127 18:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:08.127 18:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.127 18:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.127 18:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.385 18:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.385 18:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:08.385 18:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.385 18:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.385 18:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.654 18:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.654 18:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:08.654 18:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.654 18:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.654 18:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.912 18:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.912 18:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:08.912 18:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.912 18:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.912 18:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.171 18:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.171 18:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:09.171 18:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.171 18:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.171 18:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.738 18:25:25 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.738 18:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:09.738 18:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.738 18:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.738 18:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.996 18:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.996 18:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:09.996 18:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.996 18:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.996 18:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.254 18:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.254 18:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:10.254 18:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.254 18:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.254 18:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.512 18:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.512 18:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:10.512 18:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.512 18:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.512 18:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.771 18:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:10.771 18:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:10.771 18:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.771 18:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:10.771 18:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.336 18:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.336 18:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:11.336 18:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.336 18:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.336 18:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.595 18:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.595 18:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:11.595 18:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.595 18:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.595 18:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.853 18:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:12:11.853 18:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:11.853 18:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.853 18:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.853 18:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.112 18:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.112 18:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:12.112 18:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.112 18:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.112 18:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.371 18:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.371 18:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:12.371 18:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.371 18:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.371 18:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.938 18:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:12.938 18:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:12.938 18:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.938 18:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:12.938 18:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.196 18:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.196 18:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:13.196 18:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.196 18:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.196 18:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.453 18:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.453 18:25:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:13.453 18:25:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.453 18:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.453 18:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.711 18:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.711 18:25:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:13.711 18:25:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.711 18:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.711 18:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.279 18:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.279 18:25:29 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@34 -- # kill -0 74515 00:12:14.279 18:25:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.279 18:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.279 18:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.279 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74515 00:12:14.547 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (74515) - No such process 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 74515 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:14.547 rmmod nvme_tcp 00:12:14.547 rmmod nvme_fabrics 00:12:14.547 rmmod nvme_keyring 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 74461 ']' 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 74461 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 74461 ']' 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 74461 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74461 00:12:14.547 killing process with pid 74461 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74461' 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 74461 00:12:14.547 [2024-05-13 18:25:30.352584] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:12:14.547 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 74461 00:12:14.806 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:14.806 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:14.806 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:14.806 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:14.806 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:14.806 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.806 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:14.806 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.806 18:25:30 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:14.806 ************************************ 00:12:14.806 END TEST nvmf_connect_stress 00:12:14.806 ************************************ 00:12:14.806 00:12:14.806 real 0m12.483s 00:12:14.806 user 0m41.465s 00:12:14.806 sys 0m3.363s 00:12:14.806 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:14.806 18:25:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.806 18:25:30 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:14.806 18:25:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:14.806 18:25:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:14.806 18:25:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:14.806 ************************************ 00:12:14.806 START TEST nvmf_fused_ordering 00:12:14.806 ************************************ 00:12:14.806 18:25:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:15.064 * Looking for test storage... 
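The long run of "kill -0 74515" / "rpc_cmd" entries above is the monitor phase of connect_stress.sh: while the stressor process is alive the queued RPCs keep being replayed against the target, and once kill -0 fails ("No such process") the stressor is reaped and nvmftestfini tears the environment down, the same teardown every test in this log ends with. A rough sketch of that tail, reconstructed from the trace: the loop shape, the input redirection into rpc_cmd and the body of _remove_spdk_ns are inferred, everything else appears verbatim in the log.

    # Replay the queued RPCs for as long as the stressor (PID 74515 in this run) is alive.
    while kill -0 "$PERF_PID"; do
        rpc_cmd < "$rpcs"          # redirection assumed; xtrace does not show it
    done
    wait "$PERF_PID"               # reap the stressor once kill -0 reports it gone
    rm -f "$rpcs"
    trap - SIGINT SIGTERM EXIT

    # nvmftestfini: unload initiator-side kernel modules, stop the target, dismantle the netns.
    sync
    modprobe -v -r nvme-tcp        # retried in a loop in nvmf/common.sh; retries elided here
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"         # kill + wait on the nvmf_tgt PID (74461 in this run)
    _remove_spdk_ns                # namespace teardown helper; its commands are not echoed in the trace
    ip -4 addr flush nvmf_init_if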
00:12:15.064 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:15.064 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:15.065 Cannot find device "nvmf_tgt_br" 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@155 -- # true 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:15.065 Cannot find device "nvmf_tgt_br2" 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@156 -- # true 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:15.065 Cannot find device "nvmf_tgt_br" 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@158 -- # true 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:15.065 Cannot find device "nvmf_tgt_br2" 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@159 -- # true 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if 00:12:15.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:15.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:15.065 18:25:30 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:15.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:15.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:12:15.323 00:12:15.323 --- 10.0.0.2 ping statistics --- 00:12:15.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.323 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:15.323 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:15.323 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:12:15.323 00:12:15.323 --- 10.0.0.3 ping statistics --- 00:12:15.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.323 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:15.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:15.323 00:12:15.323 --- 10.0.0.1 ping statistics --- 00:12:15.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.323 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@433 -- # return 0 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:15.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.323 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=74843 00:12:15.324 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 74843 00:12:15.324 18:25:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 74843 ']' 00:12:15.324 18:25:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.324 18:25:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:15.324 18:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:15.324 18:25:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
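For reference, the nvmf_veth_init sequence traced above (namespace creation through the ping checks) can be reproduced standalone. The following is a condensed sketch that reuses the namespace, interface, and address names from the log; it is illustrative only and omits the cleanup and error handling that nvmf/common.sh performs.

```bash
#!/usr/bin/env bash
# Sketch of the topology nvmf_veth_init builds: the initiator stays in the
# default netns (10.0.0.1), the target runs inside nvmf_tgt_ns_spdk
# (10.0.0.2 and 10.0.0.3), and the peer ends of the veth pairs are joined
# by the nvmf_br bridge.
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: one end carries traffic, the *_br peer attaches to the bridge
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# move the target-facing ends into the target namespace
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# addressing: initiator 10.0.0.1, target listeners 10.0.0.2 and 10.0.0.3
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# bring the links up on both sides
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br  up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if  up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# bridge the peer ends together
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# allow NVMe/TCP traffic (port 4420) in and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# connectivity checks, mirroring the pings in the log
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
```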
00:12:15.324 18:25:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:15.324 18:25:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:15.324 [2024-05-13 18:25:31.223510] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:12:15.324 [2024-05-13 18:25:31.223632] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.583 [2024-05-13 18:25:31.355738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:15.583 [2024-05-13 18:25:31.483691] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.583 [2024-05-13 18:25:31.483759] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.583 [2024-05-13 18:25:31.483787] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.583 [2024-05-13 18:25:31.483796] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.583 [2024-05-13 18:25:31.483803] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.583 [2024-05-13 18:25:31.483828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:16.519 [2024-05-13 18:25:32.188912] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:16.519 [2024-05-13 
18:25:32.204834] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:16.519 [2024-05-13 18:25:32.205097] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:16.519 NULL1 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:16.519 18:25:32 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:16.519 [2024-05-13 18:25:32.259159] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
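At this point the harness has started nvmf_tgt inside the target namespace and provisioned it over JSON-RPC: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and a null bdev attached as a namespace. In this harness rpc_cmd is essentially a wrapper around scripts/rpc.py pointed at the /var/tmp/spdk.sock socket announced earlier, so the traced calls can be approximated directly with rpc.py; the sketch below keeps the exact arguments from the trace and is a replay sketch, not the harness code itself.

```bash
# Standalone replay of the provisioning RPCs traced above (sketch).
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# TCP transport with the same options the harness passes (-o -u 8192)
$RPC nvmf_create_transport -t tcp -o -u 8192

# Subsystem cnode1: allow any host (-a), fixed serial, up to 10 namespaces
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# NVMe/TCP listener on the namespaced target address verified earlier
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# 1000 MiB null bdev with 512-byte blocks, exported as a namespace of cnode1
$RPC bdev_null_create NULL1 1000 512
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

The fused_ordering binary invoked next connects to that listener using the transport ID string shown in the trace, which is what produces the long run of numbered fused_ordering(N) lines that follows.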
00:12:16.519 [2024-05-13 18:25:32.259213] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74893 ] 00:12:16.777 Attached to nqn.2016-06.io.spdk:cnode1 00:12:16.777 Namespace ID: 1 size: 1GB 00:12:16.777 fused_ordering(0) 00:12:16.777 fused_ordering(1) 00:12:16.777 fused_ordering(2) 00:12:16.777 fused_ordering(3) 00:12:16.777 fused_ordering(4) 00:12:16.777 fused_ordering(5) 00:12:16.777 fused_ordering(6) 00:12:16.777 fused_ordering(7) 00:12:16.777 fused_ordering(8) 00:12:16.777 fused_ordering(9) 00:12:16.777 fused_ordering(10) 00:12:16.777 fused_ordering(11) 00:12:16.777 fused_ordering(12) 00:12:16.777 fused_ordering(13) 00:12:16.777 fused_ordering(14) 00:12:16.777 fused_ordering(15) 00:12:16.777 fused_ordering(16) 00:12:16.777 fused_ordering(17) 00:12:16.777 fused_ordering(18) 00:12:16.777 fused_ordering(19) 00:12:16.777 fused_ordering(20) 00:12:16.777 fused_ordering(21) 00:12:16.777 fused_ordering(22) 00:12:16.777 fused_ordering(23) 00:12:16.777 fused_ordering(24) 00:12:16.777 fused_ordering(25) 00:12:16.777 fused_ordering(26) 00:12:16.777 fused_ordering(27) 00:12:16.777 fused_ordering(28) 00:12:16.777 fused_ordering(29) 00:12:16.777 fused_ordering(30) 00:12:16.777 fused_ordering(31) 00:12:16.777 fused_ordering(32) 00:12:16.777 fused_ordering(33) 00:12:16.777 fused_ordering(34) 00:12:16.777 fused_ordering(35) 00:12:16.777 fused_ordering(36) 00:12:16.777 fused_ordering(37) 00:12:16.777 fused_ordering(38) 00:12:16.777 fused_ordering(39) 00:12:16.777 fused_ordering(40) 00:12:16.777 fused_ordering(41) 00:12:16.777 fused_ordering(42) 00:12:16.777 fused_ordering(43) 00:12:16.777 fused_ordering(44) 00:12:16.777 fused_ordering(45) 00:12:16.777 fused_ordering(46) 00:12:16.777 fused_ordering(47) 00:12:16.777 fused_ordering(48) 00:12:16.777 fused_ordering(49) 00:12:16.777 fused_ordering(50) 00:12:16.777 fused_ordering(51) 00:12:16.777 fused_ordering(52) 00:12:16.777 fused_ordering(53) 00:12:16.777 fused_ordering(54) 00:12:16.777 fused_ordering(55) 00:12:16.777 fused_ordering(56) 00:12:16.777 fused_ordering(57) 00:12:16.777 fused_ordering(58) 00:12:16.777 fused_ordering(59) 00:12:16.777 fused_ordering(60) 00:12:16.777 fused_ordering(61) 00:12:16.777 fused_ordering(62) 00:12:16.777 fused_ordering(63) 00:12:16.777 fused_ordering(64) 00:12:16.777 fused_ordering(65) 00:12:16.777 fused_ordering(66) 00:12:16.777 fused_ordering(67) 00:12:16.777 fused_ordering(68) 00:12:16.777 fused_ordering(69) 00:12:16.777 fused_ordering(70) 00:12:16.777 fused_ordering(71) 00:12:16.777 fused_ordering(72) 00:12:16.777 fused_ordering(73) 00:12:16.777 fused_ordering(74) 00:12:16.777 fused_ordering(75) 00:12:16.777 fused_ordering(76) 00:12:16.777 fused_ordering(77) 00:12:16.777 fused_ordering(78) 00:12:16.777 fused_ordering(79) 00:12:16.777 fused_ordering(80) 00:12:16.777 fused_ordering(81) 00:12:16.777 fused_ordering(82) 00:12:16.777 fused_ordering(83) 00:12:16.777 fused_ordering(84) 00:12:16.777 fused_ordering(85) 00:12:16.777 fused_ordering(86) 00:12:16.777 fused_ordering(87) 00:12:16.777 fused_ordering(88) 00:12:16.777 fused_ordering(89) 00:12:16.777 fused_ordering(90) 00:12:16.777 fused_ordering(91) 00:12:16.777 fused_ordering(92) 00:12:16.777 fused_ordering(93) 00:12:16.777 fused_ordering(94) 00:12:16.777 fused_ordering(95) 00:12:16.777 fused_ordering(96) 00:12:16.777 fused_ordering(97) 00:12:16.777 fused_ordering(98) 
00:12:16.777 fused_ordering(99) 00:12:16.777 fused_ordering(100) 00:12:16.777 fused_ordering(101) 00:12:16.777 fused_ordering(102) 00:12:16.777 fused_ordering(103) 00:12:16.777 fused_ordering(104) 00:12:16.777 fused_ordering(105) 00:12:16.777 fused_ordering(106) 00:12:16.777 fused_ordering(107) 00:12:16.777 fused_ordering(108) 00:12:16.777 fused_ordering(109) 00:12:16.777 fused_ordering(110) 00:12:16.777 fused_ordering(111) 00:12:16.777 fused_ordering(112) 00:12:16.777 fused_ordering(113) 00:12:16.777 fused_ordering(114) 00:12:16.777 fused_ordering(115) 00:12:16.777 fused_ordering(116) 00:12:16.777 fused_ordering(117) 00:12:16.777 fused_ordering(118) 00:12:16.777 fused_ordering(119) 00:12:16.777 fused_ordering(120) 00:12:16.778 fused_ordering(121) 00:12:16.778 fused_ordering(122) 00:12:16.778 fused_ordering(123) 00:12:16.778 fused_ordering(124) 00:12:16.778 fused_ordering(125) 00:12:16.778 fused_ordering(126) 00:12:16.778 fused_ordering(127) 00:12:16.778 fused_ordering(128) 00:12:16.778 fused_ordering(129) 00:12:16.778 fused_ordering(130) 00:12:16.778 fused_ordering(131) 00:12:16.778 fused_ordering(132) 00:12:16.778 fused_ordering(133) 00:12:16.778 fused_ordering(134) 00:12:16.778 fused_ordering(135) 00:12:16.778 fused_ordering(136) 00:12:16.778 fused_ordering(137) 00:12:16.778 fused_ordering(138) 00:12:16.778 fused_ordering(139) 00:12:16.778 fused_ordering(140) 00:12:16.778 fused_ordering(141) 00:12:16.778 fused_ordering(142) 00:12:16.778 fused_ordering(143) 00:12:16.778 fused_ordering(144) 00:12:16.778 fused_ordering(145) 00:12:16.778 fused_ordering(146) 00:12:16.778 fused_ordering(147) 00:12:16.778 fused_ordering(148) 00:12:16.778 fused_ordering(149) 00:12:16.778 fused_ordering(150) 00:12:16.778 fused_ordering(151) 00:12:16.778 fused_ordering(152) 00:12:16.778 fused_ordering(153) 00:12:16.778 fused_ordering(154) 00:12:16.778 fused_ordering(155) 00:12:16.778 fused_ordering(156) 00:12:16.778 fused_ordering(157) 00:12:16.778 fused_ordering(158) 00:12:16.778 fused_ordering(159) 00:12:16.778 fused_ordering(160) 00:12:16.778 fused_ordering(161) 00:12:16.778 fused_ordering(162) 00:12:16.778 fused_ordering(163) 00:12:16.778 fused_ordering(164) 00:12:16.778 fused_ordering(165) 00:12:16.778 fused_ordering(166) 00:12:16.778 fused_ordering(167) 00:12:16.778 fused_ordering(168) 00:12:16.778 fused_ordering(169) 00:12:16.778 fused_ordering(170) 00:12:16.778 fused_ordering(171) 00:12:16.778 fused_ordering(172) 00:12:16.778 fused_ordering(173) 00:12:16.778 fused_ordering(174) 00:12:16.778 fused_ordering(175) 00:12:16.778 fused_ordering(176) 00:12:16.778 fused_ordering(177) 00:12:16.778 fused_ordering(178) 00:12:16.778 fused_ordering(179) 00:12:16.778 fused_ordering(180) 00:12:16.778 fused_ordering(181) 00:12:16.778 fused_ordering(182) 00:12:16.778 fused_ordering(183) 00:12:16.778 fused_ordering(184) 00:12:16.778 fused_ordering(185) 00:12:16.778 fused_ordering(186) 00:12:16.778 fused_ordering(187) 00:12:16.778 fused_ordering(188) 00:12:16.778 fused_ordering(189) 00:12:16.778 fused_ordering(190) 00:12:16.778 fused_ordering(191) 00:12:16.778 fused_ordering(192) 00:12:16.778 fused_ordering(193) 00:12:16.778 fused_ordering(194) 00:12:16.778 fused_ordering(195) 00:12:16.778 fused_ordering(196) 00:12:16.778 fused_ordering(197) 00:12:16.778 fused_ordering(198) 00:12:16.778 fused_ordering(199) 00:12:16.778 fused_ordering(200) 00:12:16.778 fused_ordering(201) 00:12:16.778 fused_ordering(202) 00:12:16.778 fused_ordering(203) 00:12:16.778 fused_ordering(204) 00:12:16.778 fused_ordering(205) 00:12:17.035 
fused_ordering(206) 00:12:17.035 fused_ordering(207) 00:12:17.035 fused_ordering(208) 00:12:17.035 fused_ordering(209) 00:12:17.035 fused_ordering(210) 00:12:17.035 fused_ordering(211) 00:12:17.035 fused_ordering(212) 00:12:17.035 fused_ordering(213) 00:12:17.035 fused_ordering(214) 00:12:17.035 fused_ordering(215) 00:12:17.035 fused_ordering(216) 00:12:17.035 fused_ordering(217) 00:12:17.035 fused_ordering(218) 00:12:17.035 fused_ordering(219) 00:12:17.035 fused_ordering(220) 00:12:17.035 fused_ordering(221) 00:12:17.035 fused_ordering(222) 00:12:17.035 fused_ordering(223) 00:12:17.035 fused_ordering(224) 00:12:17.035 fused_ordering(225) 00:12:17.035 fused_ordering(226) 00:12:17.035 fused_ordering(227) 00:12:17.035 fused_ordering(228) 00:12:17.035 fused_ordering(229) 00:12:17.035 fused_ordering(230) 00:12:17.035 fused_ordering(231) 00:12:17.035 fused_ordering(232) 00:12:17.035 fused_ordering(233) 00:12:17.035 fused_ordering(234) 00:12:17.035 fused_ordering(235) 00:12:17.035 fused_ordering(236) 00:12:17.035 fused_ordering(237) 00:12:17.035 fused_ordering(238) 00:12:17.035 fused_ordering(239) 00:12:17.035 fused_ordering(240) 00:12:17.035 fused_ordering(241) 00:12:17.035 fused_ordering(242) 00:12:17.035 fused_ordering(243) 00:12:17.035 fused_ordering(244) 00:12:17.035 fused_ordering(245) 00:12:17.035 fused_ordering(246) 00:12:17.035 fused_ordering(247) 00:12:17.035 fused_ordering(248) 00:12:17.035 fused_ordering(249) 00:12:17.035 fused_ordering(250) 00:12:17.035 fused_ordering(251) 00:12:17.035 fused_ordering(252) 00:12:17.035 fused_ordering(253) 00:12:17.035 fused_ordering(254) 00:12:17.035 fused_ordering(255) 00:12:17.035 fused_ordering(256) 00:12:17.035 fused_ordering(257) 00:12:17.035 fused_ordering(258) 00:12:17.035 fused_ordering(259) 00:12:17.035 fused_ordering(260) 00:12:17.035 fused_ordering(261) 00:12:17.035 fused_ordering(262) 00:12:17.035 fused_ordering(263) 00:12:17.035 fused_ordering(264) 00:12:17.035 fused_ordering(265) 00:12:17.035 fused_ordering(266) 00:12:17.035 fused_ordering(267) 00:12:17.035 fused_ordering(268) 00:12:17.035 fused_ordering(269) 00:12:17.035 fused_ordering(270) 00:12:17.035 fused_ordering(271) 00:12:17.035 fused_ordering(272) 00:12:17.035 fused_ordering(273) 00:12:17.035 fused_ordering(274) 00:12:17.035 fused_ordering(275) 00:12:17.035 fused_ordering(276) 00:12:17.035 fused_ordering(277) 00:12:17.035 fused_ordering(278) 00:12:17.035 fused_ordering(279) 00:12:17.035 fused_ordering(280) 00:12:17.035 fused_ordering(281) 00:12:17.035 fused_ordering(282) 00:12:17.035 fused_ordering(283) 00:12:17.035 fused_ordering(284) 00:12:17.035 fused_ordering(285) 00:12:17.035 fused_ordering(286) 00:12:17.035 fused_ordering(287) 00:12:17.035 fused_ordering(288) 00:12:17.035 fused_ordering(289) 00:12:17.035 fused_ordering(290) 00:12:17.035 fused_ordering(291) 00:12:17.035 fused_ordering(292) 00:12:17.035 fused_ordering(293) 00:12:17.035 fused_ordering(294) 00:12:17.035 fused_ordering(295) 00:12:17.035 fused_ordering(296) 00:12:17.035 fused_ordering(297) 00:12:17.035 fused_ordering(298) 00:12:17.035 fused_ordering(299) 00:12:17.035 fused_ordering(300) 00:12:17.035 fused_ordering(301) 00:12:17.035 fused_ordering(302) 00:12:17.035 fused_ordering(303) 00:12:17.035 fused_ordering(304) 00:12:17.035 fused_ordering(305) 00:12:17.035 fused_ordering(306) 00:12:17.035 fused_ordering(307) 00:12:17.035 fused_ordering(308) 00:12:17.035 fused_ordering(309) 00:12:17.035 fused_ordering(310) 00:12:17.035 fused_ordering(311) 00:12:17.035 fused_ordering(312) 00:12:17.035 fused_ordering(313) 
00:12:17.035 fused_ordering(314) 00:12:17.035 fused_ordering(315) 00:12:17.035 fused_ordering(316) 00:12:17.035 fused_ordering(317) 00:12:17.035 fused_ordering(318) 00:12:17.035 fused_ordering(319) 00:12:17.035 fused_ordering(320) 00:12:17.035 fused_ordering(321) 00:12:17.035 fused_ordering(322) 00:12:17.035 fused_ordering(323) 00:12:17.036 fused_ordering(324) 00:12:17.036 fused_ordering(325) 00:12:17.036 fused_ordering(326) 00:12:17.036 fused_ordering(327) 00:12:17.036 fused_ordering(328) 00:12:17.036 fused_ordering(329) 00:12:17.036 fused_ordering(330) 00:12:17.036 fused_ordering(331) 00:12:17.036 fused_ordering(332) 00:12:17.036 fused_ordering(333) 00:12:17.036 fused_ordering(334) 00:12:17.036 fused_ordering(335) 00:12:17.036 fused_ordering(336) 00:12:17.036 fused_ordering(337) 00:12:17.036 fused_ordering(338) 00:12:17.036 fused_ordering(339) 00:12:17.036 fused_ordering(340) 00:12:17.036 fused_ordering(341) 00:12:17.036 fused_ordering(342) 00:12:17.036 fused_ordering(343) 00:12:17.036 fused_ordering(344) 00:12:17.036 fused_ordering(345) 00:12:17.036 fused_ordering(346) 00:12:17.036 fused_ordering(347) 00:12:17.036 fused_ordering(348) 00:12:17.036 fused_ordering(349) 00:12:17.036 fused_ordering(350) 00:12:17.036 fused_ordering(351) 00:12:17.036 fused_ordering(352) 00:12:17.036 fused_ordering(353) 00:12:17.036 fused_ordering(354) 00:12:17.036 fused_ordering(355) 00:12:17.036 fused_ordering(356) 00:12:17.036 fused_ordering(357) 00:12:17.036 fused_ordering(358) 00:12:17.036 fused_ordering(359) 00:12:17.036 fused_ordering(360) 00:12:17.036 fused_ordering(361) 00:12:17.036 fused_ordering(362) 00:12:17.036 fused_ordering(363) 00:12:17.036 fused_ordering(364) 00:12:17.036 fused_ordering(365) 00:12:17.036 fused_ordering(366) 00:12:17.036 fused_ordering(367) 00:12:17.036 fused_ordering(368) 00:12:17.036 fused_ordering(369) 00:12:17.036 fused_ordering(370) 00:12:17.036 fused_ordering(371) 00:12:17.036 fused_ordering(372) 00:12:17.036 fused_ordering(373) 00:12:17.036 fused_ordering(374) 00:12:17.036 fused_ordering(375) 00:12:17.036 fused_ordering(376) 00:12:17.036 fused_ordering(377) 00:12:17.036 fused_ordering(378) 00:12:17.036 fused_ordering(379) 00:12:17.036 fused_ordering(380) 00:12:17.036 fused_ordering(381) 00:12:17.036 fused_ordering(382) 00:12:17.036 fused_ordering(383) 00:12:17.036 fused_ordering(384) 00:12:17.036 fused_ordering(385) 00:12:17.036 fused_ordering(386) 00:12:17.036 fused_ordering(387) 00:12:17.036 fused_ordering(388) 00:12:17.036 fused_ordering(389) 00:12:17.036 fused_ordering(390) 00:12:17.036 fused_ordering(391) 00:12:17.036 fused_ordering(392) 00:12:17.036 fused_ordering(393) 00:12:17.036 fused_ordering(394) 00:12:17.036 fused_ordering(395) 00:12:17.036 fused_ordering(396) 00:12:17.036 fused_ordering(397) 00:12:17.036 fused_ordering(398) 00:12:17.036 fused_ordering(399) 00:12:17.036 fused_ordering(400) 00:12:17.036 fused_ordering(401) 00:12:17.036 fused_ordering(402) 00:12:17.036 fused_ordering(403) 00:12:17.036 fused_ordering(404) 00:12:17.036 fused_ordering(405) 00:12:17.036 fused_ordering(406) 00:12:17.036 fused_ordering(407) 00:12:17.036 fused_ordering(408) 00:12:17.036 fused_ordering(409) 00:12:17.036 fused_ordering(410) 00:12:17.602 fused_ordering(411) 00:12:17.602 fused_ordering(412) 00:12:17.602 fused_ordering(413) 00:12:17.602 fused_ordering(414) 00:12:17.602 fused_ordering(415) 00:12:17.602 fused_ordering(416) 00:12:17.602 fused_ordering(417) 00:12:17.602 fused_ordering(418) 00:12:17.602 fused_ordering(419) 00:12:17.602 fused_ordering(420) 00:12:17.602 
fused_ordering(421) 00:12:17.602 fused_ordering(422) 00:12:17.602 fused_ordering(423) 00:12:17.602 fused_ordering(424) 00:12:17.602 fused_ordering(425) 00:12:17.602 fused_ordering(426) 00:12:17.602 fused_ordering(427) 00:12:17.602 fused_ordering(428) 00:12:17.602 fused_ordering(429) 00:12:17.602 fused_ordering(430) 00:12:17.602 fused_ordering(431) 00:12:17.602 fused_ordering(432) 00:12:17.602 fused_ordering(433) 00:12:17.602 fused_ordering(434) 00:12:17.602 fused_ordering(435) 00:12:17.602 fused_ordering(436) 00:12:17.602 fused_ordering(437) 00:12:17.602 fused_ordering(438) 00:12:17.602 fused_ordering(439) 00:12:17.602 fused_ordering(440) 00:12:17.602 fused_ordering(441) 00:12:17.602 fused_ordering(442) 00:12:17.602 fused_ordering(443) 00:12:17.602 fused_ordering(444) 00:12:17.602 fused_ordering(445) 00:12:17.602 fused_ordering(446) 00:12:17.602 fused_ordering(447) 00:12:17.602 fused_ordering(448) 00:12:17.602 fused_ordering(449) 00:12:17.602 fused_ordering(450) 00:12:17.602 fused_ordering(451) 00:12:17.602 fused_ordering(452) 00:12:17.602 fused_ordering(453) 00:12:17.602 fused_ordering(454) 00:12:17.602 fused_ordering(455) 00:12:17.602 fused_ordering(456) 00:12:17.602 fused_ordering(457) 00:12:17.602 fused_ordering(458) 00:12:17.602 fused_ordering(459) 00:12:17.602 fused_ordering(460) 00:12:17.602 fused_ordering(461) 00:12:17.602 fused_ordering(462) 00:12:17.602 fused_ordering(463) 00:12:17.602 fused_ordering(464) 00:12:17.602 fused_ordering(465) 00:12:17.602 fused_ordering(466) 00:12:17.602 fused_ordering(467) 00:12:17.602 fused_ordering(468) 00:12:17.602 fused_ordering(469) 00:12:17.602 fused_ordering(470) 00:12:17.602 fused_ordering(471) 00:12:17.602 fused_ordering(472) 00:12:17.602 fused_ordering(473) 00:12:17.602 fused_ordering(474) 00:12:17.602 fused_ordering(475) 00:12:17.602 fused_ordering(476) 00:12:17.602 fused_ordering(477) 00:12:17.602 fused_ordering(478) 00:12:17.602 fused_ordering(479) 00:12:17.602 fused_ordering(480) 00:12:17.602 fused_ordering(481) 00:12:17.602 fused_ordering(482) 00:12:17.602 fused_ordering(483) 00:12:17.602 fused_ordering(484) 00:12:17.602 fused_ordering(485) 00:12:17.602 fused_ordering(486) 00:12:17.602 fused_ordering(487) 00:12:17.602 fused_ordering(488) 00:12:17.602 fused_ordering(489) 00:12:17.602 fused_ordering(490) 00:12:17.602 fused_ordering(491) 00:12:17.602 fused_ordering(492) 00:12:17.602 fused_ordering(493) 00:12:17.602 fused_ordering(494) 00:12:17.602 fused_ordering(495) 00:12:17.602 fused_ordering(496) 00:12:17.602 fused_ordering(497) 00:12:17.602 fused_ordering(498) 00:12:17.602 fused_ordering(499) 00:12:17.602 fused_ordering(500) 00:12:17.602 fused_ordering(501) 00:12:17.602 fused_ordering(502) 00:12:17.602 fused_ordering(503) 00:12:17.602 fused_ordering(504) 00:12:17.602 fused_ordering(505) 00:12:17.602 fused_ordering(506) 00:12:17.602 fused_ordering(507) 00:12:17.602 fused_ordering(508) 00:12:17.602 fused_ordering(509) 00:12:17.602 fused_ordering(510) 00:12:17.602 fused_ordering(511) 00:12:17.602 fused_ordering(512) 00:12:17.602 fused_ordering(513) 00:12:17.602 fused_ordering(514) 00:12:17.602 fused_ordering(515) 00:12:17.602 fused_ordering(516) 00:12:17.602 fused_ordering(517) 00:12:17.602 fused_ordering(518) 00:12:17.602 fused_ordering(519) 00:12:17.602 fused_ordering(520) 00:12:17.602 fused_ordering(521) 00:12:17.602 fused_ordering(522) 00:12:17.602 fused_ordering(523) 00:12:17.602 fused_ordering(524) 00:12:17.602 fused_ordering(525) 00:12:17.602 fused_ordering(526) 00:12:17.602 fused_ordering(527) 00:12:17.602 fused_ordering(528) 
00:12:17.602 fused_ordering(529) 00:12:17.602 fused_ordering(530) 00:12:17.602 fused_ordering(531) 00:12:17.602 fused_ordering(532) 00:12:17.602 fused_ordering(533) 00:12:17.602 fused_ordering(534) 00:12:17.602 fused_ordering(535) 00:12:17.602 fused_ordering(536) 00:12:17.602 fused_ordering(537) 00:12:17.602 fused_ordering(538) 00:12:17.602 fused_ordering(539) 00:12:17.602 fused_ordering(540) 00:12:17.602 fused_ordering(541) 00:12:17.602 fused_ordering(542) 00:12:17.602 fused_ordering(543) 00:12:17.602 fused_ordering(544) 00:12:17.602 fused_ordering(545) 00:12:17.602 fused_ordering(546) 00:12:17.602 fused_ordering(547) 00:12:17.602 fused_ordering(548) 00:12:17.602 fused_ordering(549) 00:12:17.602 fused_ordering(550) 00:12:17.602 fused_ordering(551) 00:12:17.602 fused_ordering(552) 00:12:17.602 fused_ordering(553) 00:12:17.602 fused_ordering(554) 00:12:17.602 fused_ordering(555) 00:12:17.602 fused_ordering(556) 00:12:17.602 fused_ordering(557) 00:12:17.602 fused_ordering(558) 00:12:17.602 fused_ordering(559) 00:12:17.602 fused_ordering(560) 00:12:17.602 fused_ordering(561) 00:12:17.602 fused_ordering(562) 00:12:17.602 fused_ordering(563) 00:12:17.602 fused_ordering(564) 00:12:17.602 fused_ordering(565) 00:12:17.602 fused_ordering(566) 00:12:17.602 fused_ordering(567) 00:12:17.602 fused_ordering(568) 00:12:17.602 fused_ordering(569) 00:12:17.602 fused_ordering(570) 00:12:17.602 fused_ordering(571) 00:12:17.602 fused_ordering(572) 00:12:17.602 fused_ordering(573) 00:12:17.602 fused_ordering(574) 00:12:17.602 fused_ordering(575) 00:12:17.602 fused_ordering(576) 00:12:17.602 fused_ordering(577) 00:12:17.602 fused_ordering(578) 00:12:17.602 fused_ordering(579) 00:12:17.602 fused_ordering(580) 00:12:17.602 fused_ordering(581) 00:12:17.602 fused_ordering(582) 00:12:17.602 fused_ordering(583) 00:12:17.602 fused_ordering(584) 00:12:17.602 fused_ordering(585) 00:12:17.602 fused_ordering(586) 00:12:17.602 fused_ordering(587) 00:12:17.602 fused_ordering(588) 00:12:17.602 fused_ordering(589) 00:12:17.602 fused_ordering(590) 00:12:17.602 fused_ordering(591) 00:12:17.602 fused_ordering(592) 00:12:17.602 fused_ordering(593) 00:12:17.602 fused_ordering(594) 00:12:17.602 fused_ordering(595) 00:12:17.602 fused_ordering(596) 00:12:17.602 fused_ordering(597) 00:12:17.602 fused_ordering(598) 00:12:17.602 fused_ordering(599) 00:12:17.602 fused_ordering(600) 00:12:17.602 fused_ordering(601) 00:12:17.602 fused_ordering(602) 00:12:17.602 fused_ordering(603) 00:12:17.602 fused_ordering(604) 00:12:17.602 fused_ordering(605) 00:12:17.602 fused_ordering(606) 00:12:17.602 fused_ordering(607) 00:12:17.602 fused_ordering(608) 00:12:17.602 fused_ordering(609) 00:12:17.602 fused_ordering(610) 00:12:17.602 fused_ordering(611) 00:12:17.602 fused_ordering(612) 00:12:17.602 fused_ordering(613) 00:12:17.602 fused_ordering(614) 00:12:17.602 fused_ordering(615) 00:12:18.168 fused_ordering(616) 00:12:18.168 fused_ordering(617) 00:12:18.168 fused_ordering(618) 00:12:18.168 fused_ordering(619) 00:12:18.168 fused_ordering(620) 00:12:18.168 fused_ordering(621) 00:12:18.168 fused_ordering(622) 00:12:18.168 fused_ordering(623) 00:12:18.168 fused_ordering(624) 00:12:18.168 fused_ordering(625) 00:12:18.168 fused_ordering(626) 00:12:18.168 fused_ordering(627) 00:12:18.168 fused_ordering(628) 00:12:18.168 fused_ordering(629) 00:12:18.168 fused_ordering(630) 00:12:18.168 fused_ordering(631) 00:12:18.168 fused_ordering(632) 00:12:18.168 fused_ordering(633) 00:12:18.168 fused_ordering(634) 00:12:18.168 fused_ordering(635) 00:12:18.168 
fused_ordering(636) 00:12:18.168 fused_ordering(637) 00:12:18.168 fused_ordering(638) 00:12:18.168 fused_ordering(639) 00:12:18.168 fused_ordering(640) 00:12:18.168 fused_ordering(641) 00:12:18.168 fused_ordering(642) 00:12:18.168 fused_ordering(643) 00:12:18.168 fused_ordering(644) 00:12:18.168 fused_ordering(645) 00:12:18.168 fused_ordering(646) 00:12:18.168 fused_ordering(647) 00:12:18.168 fused_ordering(648) 00:12:18.168 fused_ordering(649) 00:12:18.168 fused_ordering(650) 00:12:18.168 fused_ordering(651) 00:12:18.168 fused_ordering(652) 00:12:18.168 fused_ordering(653) 00:12:18.168 fused_ordering(654) 00:12:18.168 fused_ordering(655) 00:12:18.168 fused_ordering(656) 00:12:18.168 fused_ordering(657) 00:12:18.168 fused_ordering(658) 00:12:18.168 fused_ordering(659) 00:12:18.168 fused_ordering(660) 00:12:18.168 fused_ordering(661) 00:12:18.168 fused_ordering(662) 00:12:18.168 fused_ordering(663) 00:12:18.168 fused_ordering(664) 00:12:18.168 fused_ordering(665) 00:12:18.168 fused_ordering(666) 00:12:18.168 fused_ordering(667) 00:12:18.168 fused_ordering(668) 00:12:18.168 fused_ordering(669) 00:12:18.168 fused_ordering(670) 00:12:18.168 fused_ordering(671) 00:12:18.168 fused_ordering(672) 00:12:18.168 fused_ordering(673) 00:12:18.168 fused_ordering(674) 00:12:18.168 fused_ordering(675) 00:12:18.168 fused_ordering(676) 00:12:18.168 fused_ordering(677) 00:12:18.168 fused_ordering(678) 00:12:18.168 fused_ordering(679) 00:12:18.168 fused_ordering(680) 00:12:18.168 fused_ordering(681) 00:12:18.168 fused_ordering(682) 00:12:18.168 fused_ordering(683) 00:12:18.168 fused_ordering(684) 00:12:18.168 fused_ordering(685) 00:12:18.168 fused_ordering(686) 00:12:18.168 fused_ordering(687) 00:12:18.168 fused_ordering(688) 00:12:18.168 fused_ordering(689) 00:12:18.168 fused_ordering(690) 00:12:18.168 fused_ordering(691) 00:12:18.168 fused_ordering(692) 00:12:18.168 fused_ordering(693) 00:12:18.168 fused_ordering(694) 00:12:18.168 fused_ordering(695) 00:12:18.168 fused_ordering(696) 00:12:18.168 fused_ordering(697) 00:12:18.168 fused_ordering(698) 00:12:18.168 fused_ordering(699) 00:12:18.168 fused_ordering(700) 00:12:18.168 fused_ordering(701) 00:12:18.168 fused_ordering(702) 00:12:18.168 fused_ordering(703) 00:12:18.168 fused_ordering(704) 00:12:18.168 fused_ordering(705) 00:12:18.168 fused_ordering(706) 00:12:18.168 fused_ordering(707) 00:12:18.168 fused_ordering(708) 00:12:18.168 fused_ordering(709) 00:12:18.168 fused_ordering(710) 00:12:18.168 fused_ordering(711) 00:12:18.168 fused_ordering(712) 00:12:18.168 fused_ordering(713) 00:12:18.168 fused_ordering(714) 00:12:18.168 fused_ordering(715) 00:12:18.168 fused_ordering(716) 00:12:18.168 fused_ordering(717) 00:12:18.168 fused_ordering(718) 00:12:18.168 fused_ordering(719) 00:12:18.168 fused_ordering(720) 00:12:18.168 fused_ordering(721) 00:12:18.168 fused_ordering(722) 00:12:18.168 fused_ordering(723) 00:12:18.168 fused_ordering(724) 00:12:18.168 fused_ordering(725) 00:12:18.168 fused_ordering(726) 00:12:18.168 fused_ordering(727) 00:12:18.168 fused_ordering(728) 00:12:18.168 fused_ordering(729) 00:12:18.168 fused_ordering(730) 00:12:18.168 fused_ordering(731) 00:12:18.168 fused_ordering(732) 00:12:18.168 fused_ordering(733) 00:12:18.169 fused_ordering(734) 00:12:18.169 fused_ordering(735) 00:12:18.169 fused_ordering(736) 00:12:18.169 fused_ordering(737) 00:12:18.169 fused_ordering(738) 00:12:18.169 fused_ordering(739) 00:12:18.169 fused_ordering(740) 00:12:18.169 fused_ordering(741) 00:12:18.169 fused_ordering(742) 00:12:18.169 fused_ordering(743) 
00:12:18.169 fused_ordering(744) 00:12:18.169 fused_ordering(745) 00:12:18.169 fused_ordering(746) 00:12:18.169 fused_ordering(747) 00:12:18.169 fused_ordering(748) 00:12:18.169 fused_ordering(749) 00:12:18.169 fused_ordering(750) 00:12:18.169 fused_ordering(751) 00:12:18.169 fused_ordering(752) 00:12:18.169 fused_ordering(753) 00:12:18.169 fused_ordering(754) 00:12:18.169 fused_ordering(755) 00:12:18.169 fused_ordering(756) 00:12:18.169 fused_ordering(757) 00:12:18.169 fused_ordering(758) 00:12:18.169 fused_ordering(759) 00:12:18.169 fused_ordering(760) 00:12:18.169 fused_ordering(761) 00:12:18.169 fused_ordering(762) 00:12:18.169 fused_ordering(763) 00:12:18.169 fused_ordering(764) 00:12:18.169 fused_ordering(765) 00:12:18.169 fused_ordering(766) 00:12:18.169 fused_ordering(767) 00:12:18.169 fused_ordering(768) 00:12:18.169 fused_ordering(769) 00:12:18.169 fused_ordering(770) 00:12:18.169 fused_ordering(771) 00:12:18.169 fused_ordering(772) 00:12:18.169 fused_ordering(773) 00:12:18.169 fused_ordering(774) 00:12:18.169 fused_ordering(775) 00:12:18.169 fused_ordering(776) 00:12:18.169 fused_ordering(777) 00:12:18.169 fused_ordering(778) 00:12:18.169 fused_ordering(779) 00:12:18.169 fused_ordering(780) 00:12:18.169 fused_ordering(781) 00:12:18.169 fused_ordering(782) 00:12:18.169 fused_ordering(783) 00:12:18.169 fused_ordering(784) 00:12:18.169 fused_ordering(785) 00:12:18.169 fused_ordering(786) 00:12:18.169 fused_ordering(787) 00:12:18.169 fused_ordering(788) 00:12:18.169 fused_ordering(789) 00:12:18.169 fused_ordering(790) 00:12:18.169 fused_ordering(791) 00:12:18.169 fused_ordering(792) 00:12:18.169 fused_ordering(793) 00:12:18.169 fused_ordering(794) 00:12:18.169 fused_ordering(795) 00:12:18.169 fused_ordering(796) 00:12:18.169 fused_ordering(797) 00:12:18.169 fused_ordering(798) 00:12:18.169 fused_ordering(799) 00:12:18.169 fused_ordering(800) 00:12:18.169 fused_ordering(801) 00:12:18.169 fused_ordering(802) 00:12:18.169 fused_ordering(803) 00:12:18.169 fused_ordering(804) 00:12:18.169 fused_ordering(805) 00:12:18.169 fused_ordering(806) 00:12:18.169 fused_ordering(807) 00:12:18.169 fused_ordering(808) 00:12:18.169 fused_ordering(809) 00:12:18.169 fused_ordering(810) 00:12:18.169 fused_ordering(811) 00:12:18.169 fused_ordering(812) 00:12:18.169 fused_ordering(813) 00:12:18.169 fused_ordering(814) 00:12:18.169 fused_ordering(815) 00:12:18.169 fused_ordering(816) 00:12:18.169 fused_ordering(817) 00:12:18.169 fused_ordering(818) 00:12:18.169 fused_ordering(819) 00:12:18.169 fused_ordering(820) 00:12:18.735 fused_ordering(821) 00:12:18.735 fused_ordering(822) 00:12:18.735 fused_ordering(823) 00:12:18.735 fused_ordering(824) 00:12:18.735 fused_ordering(825) 00:12:18.735 fused_ordering(826) 00:12:18.735 fused_ordering(827) 00:12:18.735 fused_ordering(828) 00:12:18.735 fused_ordering(829) 00:12:18.735 fused_ordering(830) 00:12:18.735 fused_ordering(831) 00:12:18.735 fused_ordering(832) 00:12:18.735 fused_ordering(833) 00:12:18.735 fused_ordering(834) 00:12:18.735 fused_ordering(835) 00:12:18.735 fused_ordering(836) 00:12:18.735 fused_ordering(837) 00:12:18.735 fused_ordering(838) 00:12:18.735 fused_ordering(839) 00:12:18.735 fused_ordering(840) 00:12:18.735 fused_ordering(841) 00:12:18.735 fused_ordering(842) 00:12:18.735 fused_ordering(843) 00:12:18.735 fused_ordering(844) 00:12:18.735 fused_ordering(845) 00:12:18.735 fused_ordering(846) 00:12:18.735 fused_ordering(847) 00:12:18.735 fused_ordering(848) 00:12:18.735 fused_ordering(849) 00:12:18.735 fused_ordering(850) 00:12:18.735 
fused_ordering(851) 00:12:18.735 fused_ordering(852) 00:12:18.735 fused_ordering(853) 00:12:18.735 fused_ordering(854) 00:12:18.735 fused_ordering(855) 00:12:18.735 fused_ordering(856) 00:12:18.735 fused_ordering(857) 00:12:18.735 fused_ordering(858) 00:12:18.735 fused_ordering(859) 00:12:18.735 fused_ordering(860) 00:12:18.735 fused_ordering(861) 00:12:18.735 fused_ordering(862) 00:12:18.735 fused_ordering(863) 00:12:18.735 fused_ordering(864) 00:12:18.735 fused_ordering(865) 00:12:18.735 fused_ordering(866) 00:12:18.735 fused_ordering(867) 00:12:18.735 fused_ordering(868) 00:12:18.735 fused_ordering(869) 00:12:18.735 fused_ordering(870) 00:12:18.735 fused_ordering(871) 00:12:18.735 fused_ordering(872) 00:12:18.735 fused_ordering(873) 00:12:18.735 fused_ordering(874) 00:12:18.735 fused_ordering(875) 00:12:18.735 fused_ordering(876) 00:12:18.735 fused_ordering(877) 00:12:18.735 fused_ordering(878) 00:12:18.735 fused_ordering(879) 00:12:18.735 fused_ordering(880) 00:12:18.735 fused_ordering(881) 00:12:18.735 fused_ordering(882) 00:12:18.735 fused_ordering(883) 00:12:18.735 fused_ordering(884) 00:12:18.735 fused_ordering(885) 00:12:18.735 fused_ordering(886) 00:12:18.735 fused_ordering(887) 00:12:18.735 fused_ordering(888) 00:12:18.735 fused_ordering(889) 00:12:18.735 fused_ordering(890) 00:12:18.735 fused_ordering(891) 00:12:18.735 fused_ordering(892) 00:12:18.735 fused_ordering(893) 00:12:18.735 fused_ordering(894) 00:12:18.735 fused_ordering(895) 00:12:18.735 fused_ordering(896) 00:12:18.735 fused_ordering(897) 00:12:18.735 fused_ordering(898) 00:12:18.735 fused_ordering(899) 00:12:18.735 fused_ordering(900) 00:12:18.735 fused_ordering(901) 00:12:18.735 fused_ordering(902) 00:12:18.735 fused_ordering(903) 00:12:18.735 fused_ordering(904) 00:12:18.735 fused_ordering(905) 00:12:18.735 fused_ordering(906) 00:12:18.735 fused_ordering(907) 00:12:18.735 fused_ordering(908) 00:12:18.735 fused_ordering(909) 00:12:18.735 fused_ordering(910) 00:12:18.735 fused_ordering(911) 00:12:18.735 fused_ordering(912) 00:12:18.735 fused_ordering(913) 00:12:18.735 fused_ordering(914) 00:12:18.735 fused_ordering(915) 00:12:18.735 fused_ordering(916) 00:12:18.735 fused_ordering(917) 00:12:18.735 fused_ordering(918) 00:12:18.735 fused_ordering(919) 00:12:18.735 fused_ordering(920) 00:12:18.735 fused_ordering(921) 00:12:18.735 fused_ordering(922) 00:12:18.735 fused_ordering(923) 00:12:18.735 fused_ordering(924) 00:12:18.735 fused_ordering(925) 00:12:18.735 fused_ordering(926) 00:12:18.735 fused_ordering(927) 00:12:18.735 fused_ordering(928) 00:12:18.735 fused_ordering(929) 00:12:18.735 fused_ordering(930) 00:12:18.735 fused_ordering(931) 00:12:18.735 fused_ordering(932) 00:12:18.735 fused_ordering(933) 00:12:18.735 fused_ordering(934) 00:12:18.735 fused_ordering(935) 00:12:18.735 fused_ordering(936) 00:12:18.735 fused_ordering(937) 00:12:18.735 fused_ordering(938) 00:12:18.735 fused_ordering(939) 00:12:18.735 fused_ordering(940) 00:12:18.736 fused_ordering(941) 00:12:18.736 fused_ordering(942) 00:12:18.736 fused_ordering(943) 00:12:18.736 fused_ordering(944) 00:12:18.736 fused_ordering(945) 00:12:18.736 fused_ordering(946) 00:12:18.736 fused_ordering(947) 00:12:18.736 fused_ordering(948) 00:12:18.736 fused_ordering(949) 00:12:18.736 fused_ordering(950) 00:12:18.736 fused_ordering(951) 00:12:18.736 fused_ordering(952) 00:12:18.736 fused_ordering(953) 00:12:18.736 fused_ordering(954) 00:12:18.736 fused_ordering(955) 00:12:18.736 fused_ordering(956) 00:12:18.736 fused_ordering(957) 00:12:18.736 fused_ordering(958) 
00:12:18.736 fused_ordering(959) 00:12:18.736 fused_ordering(960) 00:12:18.736 fused_ordering(961) 00:12:18.736 fused_ordering(962) 00:12:18.736 fused_ordering(963) 00:12:18.736 fused_ordering(964) 00:12:18.736 fused_ordering(965) 00:12:18.736 fused_ordering(966) 00:12:18.736 fused_ordering(967) 00:12:18.736 fused_ordering(968) 00:12:18.736 fused_ordering(969) 00:12:18.736 fused_ordering(970) 00:12:18.736 fused_ordering(971) 00:12:18.736 fused_ordering(972) 00:12:18.736 fused_ordering(973) 00:12:18.736 fused_ordering(974) 00:12:18.736 fused_ordering(975) 00:12:18.736 fused_ordering(976) 00:12:18.736 fused_ordering(977) 00:12:18.736 fused_ordering(978) 00:12:18.736 fused_ordering(979) 00:12:18.736 fused_ordering(980) 00:12:18.736 fused_ordering(981) 00:12:18.736 fused_ordering(982) 00:12:18.736 fused_ordering(983) 00:12:18.736 fused_ordering(984) 00:12:18.736 fused_ordering(985) 00:12:18.736 fused_ordering(986) 00:12:18.736 fused_ordering(987) 00:12:18.736 fused_ordering(988) 00:12:18.736 fused_ordering(989) 00:12:18.736 fused_ordering(990) 00:12:18.736 fused_ordering(991) 00:12:18.736 fused_ordering(992) 00:12:18.736 fused_ordering(993) 00:12:18.736 fused_ordering(994) 00:12:18.736 fused_ordering(995) 00:12:18.736 fused_ordering(996) 00:12:18.736 fused_ordering(997) 00:12:18.736 fused_ordering(998) 00:12:18.736 fused_ordering(999) 00:12:18.736 fused_ordering(1000) 00:12:18.736 fused_ordering(1001) 00:12:18.736 fused_ordering(1002) 00:12:18.736 fused_ordering(1003) 00:12:18.736 fused_ordering(1004) 00:12:18.736 fused_ordering(1005) 00:12:18.736 fused_ordering(1006) 00:12:18.736 fused_ordering(1007) 00:12:18.736 fused_ordering(1008) 00:12:18.736 fused_ordering(1009) 00:12:18.736 fused_ordering(1010) 00:12:18.736 fused_ordering(1011) 00:12:18.736 fused_ordering(1012) 00:12:18.736 fused_ordering(1013) 00:12:18.736 fused_ordering(1014) 00:12:18.736 fused_ordering(1015) 00:12:18.736 fused_ordering(1016) 00:12:18.736 fused_ordering(1017) 00:12:18.736 fused_ordering(1018) 00:12:18.736 fused_ordering(1019) 00:12:18.736 fused_ordering(1020) 00:12:18.736 fused_ordering(1021) 00:12:18.736 fused_ordering(1022) 00:12:18.736 fused_ordering(1023) 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:18.736 rmmod nvme_tcp 00:12:18.736 rmmod nvme_fabrics 00:12:18.736 rmmod nvme_keyring 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 74843 ']' 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 74843 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@946 -- # '[' -z 74843 ']' 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 74843 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74843 00:12:18.736 killing process with pid 74843 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74843' 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 74843 00:12:18.736 [2024-05-13 18:25:34.538800] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:18.736 18:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 74843 00:12:18.994 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:18.994 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:18.994 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:18.994 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.994 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:18.994 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.994 18:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:18.994 18:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.994 18:25:34 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:18.994 00:12:18.994 real 0m4.146s 00:12:18.994 user 0m4.886s 00:12:18.994 sys 0m1.401s 00:12:18.994 18:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:18.994 18:25:34 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:18.994 ************************************ 00:12:18.994 END TEST nvmf_fused_ordering 00:12:18.994 ************************************ 00:12:18.994 18:25:34 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:18.994 18:25:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:18.994 18:25:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:18.994 18:25:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:18.994 ************************************ 00:12:18.994 START TEST nvmf_delete_subsystem 00:12:18.994 ************************************ 00:12:18.994 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:19.253 * Looking for test storage... 
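The teardown that just completed (nvmftestfini, fired by the EXIT trap) reduces to a few steps: unload the kernel initiator modules, stop the nvmf_tgt process, remove the target namespace, and flush the initiator address. The sketch below is a simplification; killprocess and _remove_spdk_ns in the real scripts do more bookkeeping, and the namespace-deletion command is an assumption about what _remove_spdk_ns amounts to in this configuration.

```bash
# Rough equivalent of the nvmftestfini sequence traced above (sketch).
nvmfpid=74843   # PID reported by nvmfappstart in this run

sync
modprobe -v -r nvme-tcp       # rmmod output above shows nvme_tcp/nvme_fabrics/nvme_keyring unloading
modprobe -v -r nvme-fabrics

kill "$nvmfpid"               # killprocess: terminate the target...
wait "$nvmfpid" 2>/dev/null   # ...and reap it (works only if it was launched from this shell)

ip netns delete nvmf_tgt_ns_spdk   # assumed effect of _remove_spdk_ns here
ip -4 addr flush nvmf_init_if
```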
00:12:19.253 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:19.253 18:25:34 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:19.253 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:19.253 Cannot find device "nvmf_tgt_br" 00:12:19.253 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # true 00:12:19.253 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:19.253 Cannot find device "nvmf_tgt_br2" 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # true 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:19.254 Cannot find device "nvmf_tgt_br" 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # true 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:19.254 Cannot find device "nvmf_tgt_br2" 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # true 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@161 -- # ip link delete 
nvmf_init_if 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:19.254 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:19.254 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:19.254 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:19.514 18:25:35 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:19.514 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:19.514 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:12:19.514 00:12:19.514 --- 10.0.0.2 ping statistics --- 00:12:19.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.514 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:19.514 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:19.514 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:12:19.514 00:12:19.514 --- 10.0.0.3 ping statistics --- 00:12:19.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.514 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:19.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:19.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:12:19.514 00:12:19.514 --- 10.0.0.1 ping statistics --- 00:12:19.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:19.514 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@433 -- # return 0 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:19.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
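The veth/namespace bring-up interleaved through the xtrace output above reduces to a short sequence of ip and iptables commands. A condensed sketch, using the interface names and 10.0.0.x addresses exactly as they appear in this log (the authoritative implementation is nvmf_veth_init in /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator end stays in the root namespace
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target ends move into the namespace
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up       # bridge ties the three root-side veth ends together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2; ping -c 1 10.0.0.3                        # initiator -> target reachability
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1             # target -> initiator

The ping statistics logged above confirm the topology before nvmf_tgt is started inside the namespace.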
00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=75106 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 75106 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 75106 ']' 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:19.514 18:25:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:19.514 [2024-05-13 18:25:35.371631] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:12:19.514 [2024-05-13 18:25:35.371710] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:19.774 [2024-05-13 18:25:35.508164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:19.774 [2024-05-13 18:25:35.610673] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:19.774 [2024-05-13 18:25:35.610995] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:19.774 [2024-05-13 18:25:35.611127] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:19.774 [2024-05-13 18:25:35.611184] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:19.774 [2024-05-13 18:25:35.611216] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:19.774 [2024-05-13 18:25:35.611486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.774 [2024-05-13 18:25:35.611477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.709 [2024-05-13 18:25:36.383218] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.709 [2024-05-13 18:25:36.403385] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:20.709 [2024-05-13 18:25:36.403631] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.709 NULL1 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.709 18:25:36 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.709 Delay0 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=75157 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:20.709 18:25:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:20.709 [2024-05-13 18:25:36.604092] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:12:22.642 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:22.642 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.642 18:25:38 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 starting I/O failed: -6 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 starting I/O failed: -6 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 starting I/O failed: -6 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 starting I/O failed: -6 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 starting I/O failed: -6 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 starting I/O failed: -6 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 starting I/O failed: -6 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Write 
completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 starting I/O failed: -6 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 starting I/O failed: -6 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 starting I/O failed: -6 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.901 starting I/O failed: -6 00:12:22.901 Write completed with error (sct=0, sc=8) 00:12:22.901 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 
starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write 
completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 starting I/O failed: -6 00:12:22.902 starting I/O failed: -6 00:12:22.902 starting I/O failed: -6 00:12:22.902 starting I/O failed: -6 00:12:22.902 starting I/O failed: -6 00:12:22.902 starting I/O failed: -6 00:12:22.902 starting I/O failed: -6 00:12:22.902 starting I/O failed: -6 00:12:22.902 starting I/O failed: -6 00:12:22.902 starting I/O failed: -6 00:12:22.902 starting I/O failed: -6 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 starting I/O failed: -6 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 [2024-05-13 18:25:38.642840] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230fce0 is same with the state(5) to be set 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Write completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.902 Read completed with error (sct=0, sc=8) 00:12:22.903 Read completed with error (sct=0, sc=8) 00:12:22.903 Write completed with error (sct=0, sc=8) 00:12:22.903 Read completed with error (sct=0, sc=8) 00:12:22.903 Write completed with error (sct=0, sc=8) 00:12:22.903 Read completed with error (sct=0, sc=8) 00:12:22.903 Write completed with error (sct=0, sc=8) 00:12:22.903 Read completed with error (sct=0, sc=8) 00:12:22.903 Write completed with error (sct=0, sc=8) 00:12:22.903 Read completed with error (sct=0, 
sc=8) 00:12:22.903 Read completed with error (sct=0, sc=8) 00:12:22.903 Read completed with error (sct=0, sc=8) 00:12:22.903 Read completed with error (sct=0, sc=8) 00:12:22.903 Read completed with error (sct=0, sc=8) 00:12:23.838 [2024-05-13 18:25:39.619445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230f100 is same with the state(5) to be set 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 [2024-05-13 18:25:39.638686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230fff0 is same with the state(5) to be set 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 [2024-05-13 18:25:39.638888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2311220 is same with the state(5) to be set 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 
00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 [2024-05-13 18:25:39.641785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe28800bfe0 is same with the state(5) to be set 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Write completed with error (sct=0, sc=8) 00:12:23.838 Read 
completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 Read completed with error (sct=0, sc=8) 00:12:23.838 [2024-05-13 18:25:39.642467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe28800c600 is same with the state(5) to be set 00:12:23.838 Initializing NVMe Controllers 00:12:23.838 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:23.838 Controller IO queue size 128, less than required. 00:12:23.838 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:23.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:23.838 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:23.838 Initialization complete. Launching workers. 00:12:23.838 ======================================================== 00:12:23.838 Latency(us) 00:12:23.838 Device Information : IOPS MiB/s Average min max 00:12:23.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.22 0.08 901858.77 2044.37 1013221.36 00:12:23.838 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 180.12 0.09 922821.45 459.54 1013656.67 00:12:23.838 ======================================================== 00:12:23.838 Total : 347.34 0.17 912729.42 459.54 1013656.67 00:12:23.838 00:12:23.838 [2024-05-13 18:25:39.643251] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230f100 (9): Bad file descriptor 00:12:23.838 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:23.838 18:25:39 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.838 18:25:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:23.838 18:25:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 75157 00:12:23.838 18:25:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 75157 00:12:24.405 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (75157) - No such process 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 75157 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 75157 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 75157 00:12:24.405 18:25:40 
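The wall of "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines above is the intended behavior of this test: nvmf_delete_subsystem is issued while spdk_nvme_perf still has a deep queue of I/O parked behind the ~1 s delay bdev, so outstanding commands complete with an error status and new submissions fail with -6 (consistent with -ENXIO once the qpairs are torn down). Reconstructed from the rpc_cmd and perf invocations logged above, the first pass of the test is roughly the following; this is a sketch, not the verbatim delete_subsystem.sh:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py    # rpc_cmd in this log ultimately calls this script
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512               # 1000 MB null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000   # ~1 s of added latency per I/O (values in microseconds)
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # 5 s of queue-depth-128, 70/30 random read/write I/O against the listener...
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2

  # ...then delete the subsystem out from under it while that I/O is still queued.
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The latency summary above (average latencies around 900 ms against a ~1 s delay bdev) and the final "errors occurred" from spdk_nvme_perf are therefore the expected outcome of this pass.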
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.405 [2024-05-13 18:25:40.169140] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:24.405 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=75199 00:12:24.406 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:24.406 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 75199 00:12:24.406 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:24.406 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:24.406 [2024-05-13 18:25:40.348067] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
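The second pass re-creates the subsystem, re-adds the listener and the Delay0 namespace, and starts a shorter 3 s perf run, but this time lets it finish: the string of "kill -0 75199" / "sleep 0.5" probes that follows is delete_subsystem.sh polling for the perf process to exit before moving on to teardown. A minimal sketch of that pattern, assuming perf_pid holds the spdk_nvme_perf PID and with the loop bound suggested by the "(( delay++ > 20 ))" checks in the trace:

  delay=0
  while kill -0 "$perf_pid"; do        # perf still running?
      sleep 0.5
      (( delay++ > 20 )) && exit 1     # give up if perf has not exited after ~10 s
  done
  wait "$perf_pid"                     # reap it and pick up its exit status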
00:12:24.972 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:24.972 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 75199 00:12:24.972 18:25:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:25.571 18:25:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:25.571 18:25:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 75199 00:12:25.571 18:25:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:25.829 18:25:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:25.829 18:25:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 75199 00:12:25.829 18:25:41 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:26.396 18:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:26.396 18:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 75199 00:12:26.396 18:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:26.961 18:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:26.961 18:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 75199 00:12:26.961 18:25:42 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:27.528 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:27.528 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 75199 00:12:27.528 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:27.528 Initializing NVMe Controllers 00:12:27.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:27.528 Controller IO queue size 128, less than required. 00:12:27.528 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:27.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:27.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:27.528 Initialization complete. Launching workers. 
00:12:27.528 ======================================================== 00:12:27.528 Latency(us) 00:12:27.528 Device Information : IOPS MiB/s Average min max 00:12:27.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003668.20 1000135.03 1042173.50 00:12:27.528 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004831.81 1000125.41 1013836.80 00:12:27.528 ======================================================== 00:12:27.528 Total : 256.00 0.12 1004250.01 1000125.41 1042173.50 00:12:27.528 00:12:27.786 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:27.786 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 75199 00:12:27.786 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (75199) - No such process 00:12:27.786 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 75199 00:12:27.786 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:27.786 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:27.786 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:27.786 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:28.045 rmmod nvme_tcp 00:12:28.045 rmmod nvme_fabrics 00:12:28.045 rmmod nvme_keyring 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 75106 ']' 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 75106 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 75106 ']' 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 75106 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75106 00:12:28.045 killing process with pid 75106 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75106' 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 75106 00:12:28.045 [2024-05-13 18:25:43.827039] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation 
'[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:28.045 18:25:43 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 75106 00:12:28.304 18:25:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:28.304 18:25:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:28.304 18:25:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:28.304 18:25:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:28.304 18:25:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:28.304 18:25:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.304 18:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.304 18:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.304 18:25:44 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:28.304 00:12:28.304 real 0m9.237s 00:12:28.304 user 0m28.570s 00:12:28.304 sys 0m1.567s 00:12:28.304 ************************************ 00:12:28.304 END TEST nvmf_delete_subsystem 00:12:28.304 ************************************ 00:12:28.304 18:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:28.304 18:25:44 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:28.304 18:25:44 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:28.304 18:25:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:28.304 18:25:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:28.304 18:25:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:28.304 ************************************ 00:12:28.304 START TEST nvmf_ns_masking 00:12:28.304 ************************************ 00:12:28.304 18:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:28.304 * Looking for test storage... 
00:12:28.304 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:28.304 18:25:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:28.304 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=b8c8cb48-df42-4a2d-b46a-cae8347ec288 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:28.564 18:25:44 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@432 -- # nvmf_veth_init 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:28.564 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:28.565 Cannot find device "nvmf_tgt_br" 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@155 -- # true 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:28.565 Cannot find device "nvmf_tgt_br2" 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@156 -- # true 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:28.565 Cannot find device "nvmf_tgt_br" 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@158 -- # true 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # ip link set 
nvmf_tgt_br2 down 00:12:28.565 Cannot find device "nvmf_tgt_br2" 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@159 -- # true 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:28.565 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:28.565 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:28.565 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:28.823 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:28.823 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:28.823 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:28.823 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:28.823 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:28.823 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:28.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:12:28.824 00:12:28.824 --- 10.0.0.2 ping statistics --- 00:12:28.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.824 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:28.824 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:28.824 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:12:28.824 00:12:28.824 --- 10.0.0.3 ping statistics --- 00:12:28.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.824 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:28.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:28.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:28.824 00:12:28.824 --- 10.0.0.1 ping statistics --- 00:12:28.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.824 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@433 -- # return 0 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=75435 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 75435 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 75435 ']' 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:28.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
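The nvmf_veth_init block above builds a small veth-plus-bridge topology so the target can listen inside its own network namespace while the initiator stays on the host side. Condensed from the commands visible in the trace (the individual "ip link set ... up" steps and the tolerated cleanup failures are trimmed; a sketch, not a drop-in script):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target port 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # target port 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # 10.0.0.3 and, from inside the namespace, 10.0.0.1 are checked the same way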
00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:28.824 18:25:44 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:28.824 [2024-05-13 18:25:44.726736] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:12:28.824 [2024-05-13 18:25:44.726847] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:29.098 [2024-05-13 18:25:44.863911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:29.098 [2024-05-13 18:25:45.010231] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:29.098 [2024-05-13 18:25:45.010561] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:29.098 [2024-05-13 18:25:45.010720] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:29.098 [2024-05-13 18:25:45.010849] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:29.098 [2024-05-13 18:25:45.010883] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:29.098 [2024-05-13 18:25:45.011050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.098 [2024-05-13 18:25:45.011360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:29.098 [2024-05-13 18:25:45.011369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.098 [2024-05-13 18:25:45.011392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.034 18:25:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:30.034 18:25:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:12:30.034 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:30.034 18:25:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:30.034 18:25:45 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:30.034 18:25:45 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.034 18:25:45 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:30.292 [2024-05-13 18:25:46.076147] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.292 18:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:12:30.292 18:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:12:30.292 18:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:30.551 Malloc1 00:12:30.551 18:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:30.810 Malloc2 00:12:30.810 18:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:31.069 18:25:46 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:31.326 18:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.584 [2024-05-13 18:25:47.362616] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:31.584 [2024-05-13 18:25:47.362924] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.584 18:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:12:31.584 18:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b8c8cb48-df42-4a2d-b46a-cae8347ec288 -a 10.0.0.2 -s 4420 -i 4 00:12:31.584 18:25:47 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.584 18:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:12:31.584 18:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.584 18:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:12:31.584 18:25:47 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:34.114 [ 0]:0x1 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ab5de1e7184d4a7caef1a243f3a1ecde 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ab5de1e7184d4a7caef1a243f3a1ecde != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:34.114 [ 0]:0x1 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ab5de1e7184d4a7caef1a243f3a1ecde 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ab5de1e7184d4a7caef1a243f3a1ecde != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:34.114 [ 1]:0x2 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:34.114 18:25:49 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:34.115 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ba014ec7307048408c064b6a13a086ae 00:12:34.115 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ba014ec7307048408c064b6a13a086ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:34.115 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:12:34.115 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:34.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.373 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.631 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:34.890 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:12:34.890 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b8c8cb48-df42-4a2d-b46a-cae8347ec288 -a 10.0.0.2 -s 4420 -i 4 00:12:34.890 18:25:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:34.890 18:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:12:34.890 18:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.890 18:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:12:34.890 18:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:12:34.890 18:25:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:12:37.419 18:25:52 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:37.419 [ 0]:0x2 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 
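The ns_is_visible checks that recur throughout this trace reduce to three commands, visible in the @39–@41 lines: list the active namespaces, read the NGUID of the namespace in question, and treat an all-zero NGUID as "not visible". Reconstructed as a sketch (quoting simplified; /dev/nvme0 stands in for the controller name resolved via nvme list-subsys just above):
ns_is_visible() {
  nvme list-ns /dev/nvme0 | grep "$1"                           # prints e.g. "[ 0]:0x1"; informational
  nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
  [[ $nguid != "00000000000000000000000000000000" ]]            # masked namespaces report a zero NGUID
}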
00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ba014ec7307048408c064b6a13a086ae 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ba014ec7307048408c064b6a13a086ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:37.419 18:25:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:37.419 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:12:37.419 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:37.419 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:37.419 [ 0]:0x1 00:12:37.419 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:37.419 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:37.419 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ab5de1e7184d4a7caef1a243f3a1ecde 00:12:37.419 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ab5de1e7184d4a7caef1a243f3a1ecde != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:37.419 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:12:37.419 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:37.419 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:37.419 [ 1]:0x2 00:12:37.419 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:37.419 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:37.677 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ba014ec7307048408c064b6a13a086ae 00:12:37.677 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ba014ec7307048408c064b6a13a086ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:37.677 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:37.938 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:12:37.938 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:37.938 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:37.938 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:37.938 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:37.939 [ 0]:0x2 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ba014ec7307048408c064b6a13a086ae 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ba014ec7307048408c064b6a13a086ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.939 18:25:53 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:38.196 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:12:38.196 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b8c8cb48-df42-4a2d-b46a-cae8347ec288 -a 10.0.0.2 -s 4420 -i 4 00:12:38.455 18:25:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:38.455 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:12:38.455 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.455 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:12:38.455 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:12:38.455 18:25:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 
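Stripped of the xtrace noise, the per-host masking exercise above comes down to three RPCs against the same subsystem (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py); a condensed sketch of the calls as they appear in the trace:
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible   # nsid 1 hidden from all hosts
rpc.py nvmf_ns_add_host      nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # host1 now sees nsid 1
rpc.py nvmf_ns_remove_host   nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # hidden from host1 again
Between calls the test re-runs ns_is_visible, sometimes across a fresh nvme connect/disconnect, to confirm that the initiator's view actually changed.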
00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:40.365 [ 0]:0x1 00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:40.365 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:40.623 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ab5de1e7184d4a7caef1a243f3a1ecde 00:12:40.623 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ab5de1e7184d4a7caef1a243f3a1ecde != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.623 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:12:40.623 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:40.623 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:40.623 [ 1]:0x2 00:12:40.623 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:40.623 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:40.623 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ba014ec7307048408c064b6a13a086ae 00:12:40.623 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ba014ec7307048408c064b6a13a086ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.623 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 
-o json 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:40.881 [ 0]:0x2 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ba014ec7307048408c064b6a13a086ae 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ba014ec7307048408c064b6a13a086ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:40.881 18:25:56 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:41.447 [2024-05-13 18:25:57.105382] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:41.447 2024/05/13 18:25:57 
error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:12:41.447 request: 00:12:41.447 { 00:12:41.447 "method": "nvmf_ns_remove_host", 00:12:41.447 "params": { 00:12:41.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:41.447 "nsid": 2, 00:12:41.447 "host": "nqn.2016-06.io.spdk:host1" 00:12:41.447 } 00:12:41.447 } 00:12:41.447 Got JSON-RPC error response 00:12:41.447 GoRPCClient: error on JSON-RPC call 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:12:41.447 [ 0]:0x2 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=ba014ec7307048408c064b6a13a086ae 
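The Code=-32602 error above is the expected outcome: nsid 2 was added earlier without --no-auto-visible (the @65 call), so the target refuses to change its per-host visibility, and the test wraps the RPC in the NOT helper so that the failure is the passing result. The effect of that wrapper, sketched (not the exact autotest_common.sh implementation):
if rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
  exit 1    # the call was supposed to fail; succeeding would fail the test
fi
# The target answers with JSON-RPC error -32602 (Invalid parameters), NOT
# records es=1, and the test continues to the next visibility check.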
00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ ba014ec7307048408c064b6a13a086ae != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.447 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:41.707 rmmod nvme_tcp 00:12:41.707 rmmod nvme_fabrics 00:12:41.707 rmmod nvme_keyring 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 75435 ']' 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 75435 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 75435 ']' 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 75435 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75435 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:41.707 killing process with pid 75435 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75435' 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 75435 00:12:41.707 [2024-05-13 18:25:57.613775] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:41.707 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 75435 00:12:42.275 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:42.275 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:42.275 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:42.275 18:25:57 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:42.275 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:42.275 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.275 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.275 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.275 18:25:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:42.275 00:12:42.275 real 0m13.792s 00:12:42.275 user 0m54.995s 00:12:42.275 sys 0m2.381s 00:12:42.275 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:42.275 18:25:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:42.275 ************************************ 00:12:42.275 END TEST nvmf_ns_masking 00:12:42.275 ************************************ 00:12:42.275 18:25:58 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:12:42.275 18:25:58 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:42.275 18:25:58 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:42.275 18:25:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:42.275 18:25:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:42.275 18:25:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:42.275 ************************************ 00:12:42.275 START TEST nvmf_vfio_user 00:12:42.275 ************************************ 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:42.275 * Looking for test storage... 
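Before the vfio-user trace gets going, the teardown that closed the masking test just above (nvmftestfini) is worth condensing; a sketch of the steps shown in the trace:
modprobe -v -r nvme-tcp          # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill "$nvmfpid"                  # killprocess 75435, the nvmf_tgt started for this test
_remove_spdk_ns                  # drops the nvmf_tgt_ns_spdk namespace; its output is sent to '14> /dev/null'
ip -4 addr flush nvmf_init_if    # remove 10.0.0.1/24 from the initiator-side veth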
00:12:42.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:42.275 
18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=75897 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 75897' 00:12:42.275 Process pid: 75897 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 75897 00:12:42.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 75897 ']' 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:42.275 18:25:58 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:42.275 [2024-05-13 18:25:58.192161] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:12:42.275 [2024-05-13 18:25:58.192460] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.534 [2024-05-13 18:25:58.327160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.534 [2024-05-13 18:25:58.449287] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.534 [2024-05-13 18:25:58.449375] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.534 [2024-05-13 18:25:58.449402] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.534 [2024-05-13 18:25:58.449411] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.534 [2024-05-13 18:25:58.449418] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
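Note: the trace above launches the nvmf target with shared-memory id 0, all tracepoint groups enabled (-e 0xFFFF) and reactors pinned to cores 0-3 (-m '[0,1,2,3]'), then blocks in waitforlisten until the app answers on its RPC socket. The sketch below is only an illustrative hand-run equivalent; the polling loop is an assumed stand-in for waitforlisten, and the paths mirror the trace.

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &   # same flags as the @54 invocation above
    nvmfpid=$!
    # assumed stand-in for waitforlisten: poll the default RPC socket until the app responds
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done
    echo "nvmf_tgt (pid $nvmfpid) is ready for RPCs"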
00:12:42.534 [2024-05-13 18:25:58.449557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.534 [2024-05-13 18:25:58.450932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.534 [2024-05-13 18:25:58.451096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.534 [2024-05-13 18:25:58.451102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.468 18:25:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:43.468 18:25:59 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:12:43.468 18:25:59 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:44.401 18:26:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:44.660 18:26:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:44.660 18:26:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:44.660 18:26:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:44.660 18:26:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:44.660 18:26:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:44.918 Malloc1 00:12:44.918 18:26:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:45.176 18:26:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:45.434 18:26:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:45.692 [2024-05-13 18:26:01.494721] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:45.692 18:26:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:45.692 18:26:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:45.692 18:26:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:45.957 Malloc2 00:12:45.957 18:26:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:46.252 18:26:01 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:46.510 18:26:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:46.770 18:26:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:46.770 18:26:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 
2 00:12:46.770 18:26:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:46.770 18:26:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:46.770 18:26:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:46.770 18:26:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:46.770 [2024-05-13 18:26:02.518875] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:12:46.770 [2024-05-13 18:26:02.518951] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76032 ] 00:12:46.770 [2024-05-13 18:26:02.654679] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:46.770 [2024-05-13 18:26:02.667977] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:46.770 [2024-05-13 18:26:02.668013] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fa864b22000 00:12:46.770 [2024-05-13 18:26:02.668961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:46.770 [2024-05-13 18:26:02.669955] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:46.770 [2024-05-13 18:26:02.670953] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:46.770 [2024-05-13 18:26:02.671959] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:46.770 [2024-05-13 18:26:02.672961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:46.770 [2024-05-13 18:26:02.673959] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:46.770 [2024-05-13 18:26:02.674959] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:46.770 [2024-05-13 18:26:02.675961] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:46.770 [2024-05-13 18:26:02.676971] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:46.770 [2024-05-13 18:26:02.677005] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fa86421f000 00:12:46.770 [2024-05-13 18:26:02.678248] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:46.770 [2024-05-13 18:26:02.694546] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path 
/var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:46.770 [2024-05-13 18:26:02.694595] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:46.770 [2024-05-13 18:26:02.697064] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:46.771 [2024-05-13 18:26:02.697127] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:46.771 [2024-05-13 18:26:02.697230] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:46.771 [2024-05-13 18:26:02.697255] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:46.771 [2024-05-13 18:26:02.697261] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:46.771 [2024-05-13 18:26:02.698037] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:46.771 [2024-05-13 18:26:02.698065] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:46.771 [2024-05-13 18:26:02.698076] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:46.771 [2024-05-13 18:26:02.699038] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:46.771 [2024-05-13 18:26:02.699063] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:46.771 [2024-05-13 18:26:02.699075] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:46.771 [2024-05-13 18:26:02.700040] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:46.771 [2024-05-13 18:26:02.700063] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:46.771 [2024-05-13 18:26:02.701044] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:46.771 [2024-05-13 18:26:02.701068] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:46.771 [2024-05-13 18:26:02.701075] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:46.771 [2024-05-13 18:26:02.701085] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:46.771 [2024-05-13 18:26:02.701191] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:46.771 [2024-05-13 18:26:02.701197] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:46.771 [2024-05-13 18:26:02.701202] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:46.771 [2024-05-13 18:26:02.702054] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:46.771 [2024-05-13 18:26:02.703053] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:46.771 [2024-05-13 18:26:02.704057] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:46.771 [2024-05-13 18:26:02.705052] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:46.771 [2024-05-13 18:26:02.705148] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:46.771 [2024-05-13 18:26:02.708587] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:46.771 [2024-05-13 18:26:02.708613] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:46.771 [2024-05-13 18:26:02.708620] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.708643] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:46.771 [2024-05-13 18:26:02.708659] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.708675] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:46.771 [2024-05-13 18:26:02.708681] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:46.771 [2024-05-13 18:26:02.708697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:46.771 [2024-05-13 18:26:02.708750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:46.771 [2024-05-13 18:26:02.708762] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:46.771 [2024-05-13 18:26:02.708767] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:46.771 [2024-05-13 18:26:02.708772] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:46.771 [2024-05-13 18:26:02.708777] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:46.771 [2024-05-13 18:26:02.708782] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 
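Note: the controller bring-up traced here (CC.EN/CSTS.RDY handshake, register reads, identify and AER setup) runs against the vfio-user endpoints created by setup_nvmf_vfio_user a few entries earlier. For reference, the per-device RPC sequence visible in that part of the trace boils down to the sketch below; the paths, NQNs and sizes mirror the trace (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, NUM_DEVICES=2), and the loop form is only an illustration of what the script does for each device.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER                       # one VFIOUSER transport shared by both devices
    for i in 1 2; do
        dir=/var/run/vfio-user/domain/vfio-user$i/$i
        mkdir -p "$dir"                                          # socket directory the listener will use
        $rpc bdev_malloc_create 64 512 -b Malloc$i               # 64 MiB bdev, 512-byte blocks
        $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a "$dir" -s 0
    done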
00:12:46.771 [2024-05-13 18:26:02.708788] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:46.771 [2024-05-13 18:26:02.708793] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.708802] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.708817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:46.771 [2024-05-13 18:26:02.708834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:46.771 [2024-05-13 18:26:02.708846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.771 [2024-05-13 18:26:02.708855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.771 [2024-05-13 18:26:02.708873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.771 [2024-05-13 18:26:02.708883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:46.771 [2024-05-13 18:26:02.708889] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.708903] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.708914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:46.771 [2024-05-13 18:26:02.708925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:46.771 [2024-05-13 18:26:02.708932] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:46.771 [2024-05-13 18:26:02.708937] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.708948] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.708955] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.708965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:46.771 [2024-05-13 18:26:02.708975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:46.771 [2024-05-13 18:26:02.709030] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.709041] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.709050] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:46.771 [2024-05-13 18:26:02.709055] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:46.771 [2024-05-13 18:26:02.709062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:46.771 [2024-05-13 18:26:02.709078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:46.771 [2024-05-13 18:26:02.709093] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:46.771 [2024-05-13 18:26:02.709118] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.709128] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.709136] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:46.771 [2024-05-13 18:26:02.709140] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:46.771 [2024-05-13 18:26:02.709147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:46.771 [2024-05-13 18:26:02.709176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:46.771 [2024-05-13 18:26:02.709192] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.709202] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.709210] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:46.771 [2024-05-13 18:26:02.709215] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:46.771 [2024-05-13 18:26:02.709222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:46.771 [2024-05-13 18:26:02.709235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:46.771 [2024-05-13 18:26:02.709245] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.709253] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.709264] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.709271] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.709276] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.709282] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:46.771 [2024-05-13 18:26:02.709287] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:46.771 [2024-05-13 18:26:02.709292] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:46.771 [2024-05-13 18:26:02.709324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:46.771 [2024-05-13 18:26:02.709341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:46.771 [2024-05-13 18:26:02.709358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:46.771 [2024-05-13 18:26:02.709369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:46.771 [2024-05-13 18:26:02.709382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:46.771 [2024-05-13 18:26:02.709394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:46.771 [2024-05-13 18:26:02.709407] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:46.771 [2024-05-13 18:26:02.709418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:46.771 [2024-05-13 18:26:02.709432] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:46.771 [2024-05-13 18:26:02.709437] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:46.771 [2024-05-13 18:26:02.709441] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:46.771 [2024-05-13 18:26:02.709445] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:46.771 [2024-05-13 18:26:02.709451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:46.771 [2024-05-13 18:26:02.709459] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:46.771 [2024-05-13 18:26:02.709464] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:46.771 [2024-05-13 18:26:02.709471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:46.771 [2024-05-13 18:26:02.709478] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:46.771 [2024-05-13 18:26:02.709483] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:46.771 [2024-05-13 18:26:02.709489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:46.772 [2024-05-13 18:26:02.709501] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:46.772 [2024-05-13 18:26:02.709506] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:46.772 ===================================================== 00:12:46.772 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:46.772 ===================================================== 00:12:46.772 Controller Capabilities/Features 00:12:46.772 ================================ 00:12:46.772 Vendor ID: 4e58 00:12:46.772 Subsystem Vendor ID: 4e58 00:12:46.772 Serial Number: SPDK1 00:12:46.772 Model Number: SPDK bdev Controller 00:12:46.772 Firmware Version: 24.05 00:12:46.772 Recommended Arb Burst: 6 00:12:46.772 IEEE OUI Identifier: 8d 6b 50 00:12:46.772 Multi-path I/O 00:12:46.772 May have multiple subsystem ports: Yes 00:12:46.772 May have multiple controllers: Yes 00:12:46.772 Associated with SR-IOV VF: No 00:12:46.772 Max Data Transfer Size: 131072 00:12:46.772 Max Number of Namespaces: 32 00:12:46.772 Max Number of I/O Queues: 127 00:12:46.772 NVMe Specification Version (VS): 1.3 00:12:46.772 NVMe Specification Version (Identify): 1.3 00:12:46.772 Maximum Queue Entries: 256 00:12:46.772 Contiguous Queues Required: Yes 00:12:46.772 Arbitration Mechanisms Supported 00:12:46.772 Weighted Round Robin: Not Supported 00:12:46.772 Vendor Specific: Not Supported 00:12:46.772 Reset Timeout: 15000 ms 00:12:46.772 Doorbell Stride: 4 bytes 00:12:46.772 NVM Subsystem Reset: Not Supported 00:12:46.772 Command Sets Supported 00:12:46.772 NVM Command Set: Supported 00:12:46.772 Boot Partition: Not Supported 00:12:46.772 Memory Page Size Minimum: 4096 bytes 00:12:46.772 Memory Page Size Maximum: 4096 bytes 00:12:46.772 Persistent Memory Region: Not Supported 00:12:46.772 Optional Asynchronous Events Supported 00:12:46.772 Namespace Attribute Notices: Supported 00:12:46.772 Firmware Activation Notices: Not Supported 00:12:46.772 ANA Change Notices: Not Supported 00:12:46.772 PLE Aggregate Log Change Notices: Not Supported 00:12:46.772 LBA Status Info Alert Notices: Not Supported 00:12:46.772 EGE Aggregate Log Change Notices: Not Supported 00:12:46.772 Normal NVM Subsystem Shutdown event: Not Supported 00:12:46.772 Zone Descriptor Change Notices: Not Supported 00:12:46.772 Discovery Log Change Notices: Not Supported 00:12:46.772 Controller Attributes 00:12:46.772 128-bit Host Identifier: Supported 00:12:46.772 Non-Operational Permissive Mode: Not Supported 00:12:46.772 NVM Sets: Not Supported 00:12:46.772 Read Recovery Levels: Not Supported 00:12:46.772 Endurance Groups: Not Supported 00:12:46.772 Predictable Latency Mode: Not Supported 00:12:46.772 Traffic Based Keep ALive: Not Supported 00:12:46.772 Namespace Granularity: Not Supported 00:12:46.772 SQ Associations: Not Supported 00:12:46.772 UUID List: Not Supported 00:12:46.772 
Multi-Domain Subsystem: Not Supported 00:12:46.772 Fixed Capacity Management: Not Supported 00:12:46.772 Variable Capacity Management: Not Supported 00:12:46.772 Delete Endurance Group: Not Supported 00:12:46.772 Delete NVM Set: Not Supported 00:12:46.772 Extended LBA Formats Supported: Not Supported 00:12:46.772 Flexible Data Placement Supported: Not Supported 00:12:46.772 00:12:46.772 Controller Memory Buffer Support 00:12:46.772 ================================ 00:12:46.772 Supported: No 00:12:46.772 00:12:46.772 Persistent Memory Region Support 00:12:46.772 ================================ 00:12:46.772 Supported: No 00:12:46.772 00:12:46.772 Admin Command Set Attributes 00:12:46.772 ============================ 00:12:46.772 Security Send/Receive: Not Supported 00:12:46.772 Format NVM: Not Supported 00:12:46.772 Firmware Activate/Download: Not Supported 00:12:46.772 Namespace Management: Not Supported 00:12:46.772 Device Self-Test: Not Supported 00:12:46.772 Directives: Not Supported 00:12:46.772 NVMe-MI: Not Supported 00:12:46.772 Virtualization Management: Not Supported 00:12:46.772 Doorbell Buffer Config: Not Supported 00:12:46.772 Get LBA Status Capability: Not Supported 00:12:46.772 Command & Feature Lockdown Capability: Not Supported 00:12:46.772 Abort Command Limit: 4 00:12:46.772 Async Event Request Limit: 4 00:12:46.772 Number of Firmware Slots: N/A 00:12:46.772 Firmware Slot 1 Read-Only: N/A 00:12:46.772 Firmware Activation Without Reset: N/A 00:12:46.772 Multiple Update Detection Support: N/A 00:12:46.772 Firmware Update Granularity: No Information Provided 00:12:46.772 Per-Namespace SMART Log: No 00:12:46.772 Asymmetric Namespace Access Log Page: Not Supported 00:12:46.772 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:46.772 Command Effects Log Page: Supported 00:12:46.772 Get Log Page Extended Data: Supported 00:12:46.772 Telemetry Log Pages: Not Supported 00:12:46.772 Persistent Event Log Pages: Not Supported 00:12:46.772 Supported Log Pages Log Page: May Support 00:12:46.772 Commands Supported & Effects Log Page: Not Supported 00:12:46.772 Feature Identifiers & Effects Log Page:May Support 00:12:46.772 NVMe-MI Commands & Effects Log Page: May Support 00:12:46.772 Data Area 4 for Telemetry Log: Not Supported 00:12:46.772 Error Log Page Entries Supported: 128 00:12:46.772 Keep Alive: Supported 00:12:46.772 Keep Alive Granularity: 10000 ms 00:12:46.772 00:12:46.772 NVM Command Set Attributes 00:12:46.772 ========================== 00:12:46.772 Submission Queue Entry Size 00:12:46.772 Max: 64 00:12:46.772 Min: 64 00:12:46.772 Completion Queue Entry Size 00:12:46.772 Max: 16 00:12:46.772 Min: 16 00:12:46.772 Number of Namespaces: 32 00:12:46.772 Compare Command: Supported 00:12:46.772 Write Uncorrectable Command: Not Supported 00:12:46.772 Dataset Management Command: Supported 00:12:46.772 Write Zeroes Command: Supported 00:12:46.772 Set Features Save Field: Not Supported 00:12:46.772 Reservations: Not Supported 00:12:46.772 Timestamp: Not Supported 00:12:46.772 Copy: Supported 00:12:46.772 Volatile Write Cache: Present 00:12:46.772 Atomic Write Unit (Normal): 1 00:12:46.772 Atomic Write Unit (PFail): 1 00:12:46.772 Atomic Compare & Write Unit: 1 00:12:46.772 Fused Compare & Write: Supported 00:12:46.772 Scatter-Gather List 00:12:46.772 SGL Command Set: Supported (Dword aligned) 00:12:46.772 SGL Keyed: Not Supported 00:12:46.772 SGL Bit Bucket Descriptor: Not Supported 00:12:46.772 SGL Metadata Pointer: Not Supported 00:12:46.772 Oversized SGL: Not Supported 00:12:46.772 SGL 
Metadata Address: Not Supported 00:12:46.772 SGL Offset: Not Supported 00:12:46.772 Transport SGL Data Block: Not Supported 00:12:46.772 Replay Protected Memory Block: Not Supported 00:12:46.772 00:12:46.772 Firmware Slot Information 00:12:46.772 ========================= 00:12:46.772 Active slot: 1 00:12:46.772 Slot 1 Firmware Revision: 24.05 00:12:46.772 00:12:46.772 00:12:46.772 Commands Supported and Effects 00:12:46.772 ============================== 00:12:46.772 Admin Commands 00:12:46.772 -------------- 00:12:46.772 Get Log Page (02h): Supported 00:12:46.772 Identify (06h): Supported 00:12:46.772 Abort (08h): Supported 00:12:46.772 Set Features (09h): Supported 00:12:46.772 Get Features (0Ah): Supported 00:12:46.772 Asynchronous Event Request (0Ch): Supported 00:12:46.772 Keep Alive (18h): Supported 00:12:46.772 I/O Commands 00:12:46.772 ------------ 00:12:46.772 Flush (00h): Supported LBA-Change 00:12:46.772 Write (01h): Supported LBA-Change 00:12:46.772 Read (02h): Supported 00:12:46.772 Compare (05h): Supported 00:12:46.772 Write Zeroes (08h): Supported LBA-Change 00:12:46.772 Dataset Management (09h): Supported LBA-Change 00:12:46.772 Copy (19h): Supported LBA-Change 00:12:46.772 Unknown (79h): Supported LBA-Change 00:12:46.772 Unknown (7Ah): Supported 00:12:46.772 00:12:46.772 Error Log 00:12:46.772 ========= 00:12:46.772 00:12:46.772 Arbitration 00:12:46.772 =========== 00:12:46.772 Arbitration Burst: 1 00:12:46.772 00:12:46.772 Power Management 00:12:46.772 ================ 00:12:46.772 Number of Power States: 1 00:12:46.772 Current Power State: Power State #0 00:12:46.772 Power State #0: 00:12:46.772 Max Power: 0.00 W 00:12:46.772 Non-Operational State: Operational 00:12:46.772 Entry Latency: Not Reported 00:12:46.772 Exit Latency: Not Reported 00:12:46.772 Relative Read Throughput: 0 00:12:46.772 Relative Read Latency: 0 00:12:46.772 Relative Write Throughput: 0 00:12:46.772 Relative Write Latency: 0 00:12:46.772 Idle Power: Not Reported 00:12:46.772 Active Power: Not Reported 00:12:46.772 Non-Operational Permissive Mode: Not Supported 00:12:46.772 00:12:46.772 Health Information 00:12:46.772 ================== 00:12:46.772 Critical Warnings: 00:12:46.772 Available Spare Space: OK 00:12:46.772 Temperature: OK 00:12:46.772 Device Reliability: OK 00:12:46.772 Read Only: No 00:12:46.772 Volatile Memory Backup: OK 00:12:46.772 Current Temperature: 0 Kelvin (-2[2024-05-13 18:26:02.709513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:46.772 [2024-05-13 18:26:02.709520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:46.772 [2024-05-13 18:26:02.709537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:46.772 [2024-05-13 18:26:02.709549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:46.772 [2024-05-13 18:26:02.709560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:46.772 [2024-05-13 18:26:02.709715] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:46.772 [2024-05-13 18:26:02.709727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 
sqhd:0014 p:1 m:0 dnr:0 00:12:46.772 [2024-05-13 18:26:02.709769] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:46.772 [2024-05-13 18:26:02.709782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.772 [2024-05-13 18:26:02.709790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.772 [2024-05-13 18:26:02.709797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.772 [2024-05-13 18:26:02.709804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:46.772 [2024-05-13 18:26:02.710086] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:46.772 [2024-05-13 18:26:02.710101] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:46.772 [2024-05-13 18:26:02.711082] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:46.772 [2024-05-13 18:26:02.711161] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:46.772 [2024-05-13 18:26:02.711172] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:47.031 [2024-05-13 18:26:02.712086] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:47.031 [2024-05-13 18:26:02.712115] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:47.031 [2024-05-13 18:26:02.712269] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:47.031 [2024-05-13 18:26:02.714588] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:47.031 73 Celsius) 00:12:47.031 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:47.031 Available Spare: 0% 00:12:47.031 Available Spare Threshold: 0% 00:12:47.031 Life Percentage Used: 0% 00:12:47.031 Data Units Read: 0 00:12:47.031 Data Units Written: 0 00:12:47.031 Host Read Commands: 0 00:12:47.031 Host Write Commands: 0 00:12:47.031 Controller Busy Time: 0 minutes 00:12:47.031 Power Cycles: 0 00:12:47.031 Power On Hours: 0 hours 00:12:47.031 Unsafe Shutdowns: 0 00:12:47.031 Unrecoverable Media Errors: 0 00:12:47.031 Lifetime Error Log Entries: 0 00:12:47.031 Warning Temperature Time: 0 minutes 00:12:47.031 Critical Temperature Time: 0 minutes 00:12:47.031 00:12:47.031 Number of Queues 00:12:47.031 ================ 00:12:47.031 Number of I/O Submission Queues: 127 00:12:47.031 Number of I/O Completion Queues: 127 00:12:47.031 00:12:47.031 Active Namespaces 00:12:47.031 ================= 00:12:47.031 Namespace ID:1 00:12:47.031 Error Recovery Timeout: Unlimited 00:12:47.031 Command Set Identifier: NVM (00h) 00:12:47.031 Deallocate: Supported 00:12:47.031 Deallocated/Unwritten Error: Not Supported 00:12:47.031 Deallocated Read Value: Unknown 00:12:47.031 
Deallocate in Write Zeroes: Not Supported 00:12:47.031 Deallocated Guard Field: 0xFFFF 00:12:47.031 Flush: Supported 00:12:47.031 Reservation: Supported 00:12:47.031 Namespace Sharing Capabilities: Multiple Controllers 00:12:47.031 Size (in LBAs): 131072 (0GiB) 00:12:47.031 Capacity (in LBAs): 131072 (0GiB) 00:12:47.031 Utilization (in LBAs): 131072 (0GiB) 00:12:47.031 NGUID: 16AF210AB7344AE8AEE2636949E5FA61 00:12:47.031 UUID: 16af210a-b734-4ae8-aee2-636949e5fa61 00:12:47.031 Thin Provisioning: Not Supported 00:12:47.031 Per-NS Atomic Units: Yes 00:12:47.031 Atomic Boundary Size (Normal): 0 00:12:47.031 Atomic Boundary Size (PFail): 0 00:12:47.031 Atomic Boundary Offset: 0 00:12:47.031 Maximum Single Source Range Length: 65535 00:12:47.031 Maximum Copy Length: 65535 00:12:47.031 Maximum Source Range Count: 1 00:12:47.031 NGUID/EUI64 Never Reused: No 00:12:47.031 Namespace Write Protected: No 00:12:47.031 Number of LBA Formats: 1 00:12:47.031 Current LBA Format: LBA Format #00 00:12:47.031 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:47.031 00:12:47.031 18:26:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:47.289 [2024-05-13 18:26:03.040527] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:52.590 Initializing NVMe Controllers 00:12:52.590 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:52.590 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:52.590 Initialization complete. Launching workers. 00:12:52.590 ======================================================== 00:12:52.590 Latency(us) 00:12:52.590 Device Information : IOPS MiB/s Average min max 00:12:52.590 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35769.19 139.72 3578.87 1108.24 9862.91 00:12:52.590 ======================================================== 00:12:52.590 Total : 35769.19 139.72 3578.87 1108.24 9862.91 00:12:52.590 00:12:52.590 [2024-05-13 18:26:08.050541] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:52.590 18:26:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:52.590 [2024-05-13 18:26:08.379761] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:57.908 Initializing NVMe Controllers 00:12:57.908 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:57.908 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:57.908 Initialization complete. Launching workers. 
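Note: the @84 read run above and the @85 write run whose results continue below both drive the first controller with spdk_nvme_perf, using a vfio-user transport ID of the form 'trtype:VFIOUSER traddr:<socket dir> subnqn:<subsystem NQN>' plus queue depth 128 (-q), 4096-byte I/Os (-o), a 5 second run (-t) and core mask 0x2 (-c). A hypothetical variant, not executed in this job, pointed at the second endpoint created earlier would look like:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The write-run latency table resumes immediately below.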
00:12:57.908 ======================================================== 00:12:57.908 Latency(us) 00:12:57.908 Device Information : IOPS MiB/s Average min max 00:12:57.908 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15986.56 62.45 8014.37 3966.79 17642.17 00:12:57.908 ======================================================== 00:12:57.908 Total : 15986.56 62.45 8014.37 3966.79 17642.17 00:12:57.908 00:12:57.908 [2024-05-13 18:26:13.412055] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:57.908 18:26:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:57.908 [2024-05-13 18:26:13.687713] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:03.240 [2024-05-13 18:26:18.744897] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:03.240 Initializing NVMe Controllers 00:13:03.240 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:03.240 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:03.240 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:03.240 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:03.240 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:03.240 Initialization complete. Launching workers. 00:13:03.240 Starting thread on core 2 00:13:03.240 Starting thread on core 3 00:13:03.240 Starting thread on core 1 00:13:03.240 18:26:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:03.240 [2024-05-13 18:26:19.082589] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:06.541 [2024-05-13 18:26:22.148237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:06.541 Initializing NVMe Controllers 00:13:06.541 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:06.541 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:06.541 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:06.541 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:06.541 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:06.541 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:06.541 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:06.541 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:06.541 Initialization complete. Launching workers. 
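Note: the per-core arbitration results that follow, like the perf and reconnect runs above, are pinned by hex core masks (-c 0x2, -c 0xE, -c 0xf) while the target itself was started on cores 0-3 (-m '[0,1,2,3]'). The helper below is only an illustrative sketch, not part of the test scripts, for expanding such a mask into the cores it selects:

    # expand a hex core mask into the cores it selects, e.g. 0xE -> 1 2 3
    mask_to_cores() {
        local mask=$(( $1 )) core=0
        while (( mask > 0 )); do
            if (( mask & 1 )); then printf '%d ' "$core"; fi
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo
    }
    mask_to_cores 0x2   # -> 1        (perf runs, -c 0x2)
    mask_to_cores 0xE   # -> 1 2 3    (reconnect run, -c 0xE)
    mask_to_cores 0xf   # -> 0 1 2 3  (arbitration run, -c 0xf)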
00:13:06.541 Starting thread on core 1 with urgent priority queue 00:13:06.541 Starting thread on core 2 with urgent priority queue 00:13:06.541 Starting thread on core 3 with urgent priority queue 00:13:06.541 Starting thread on core 0 with urgent priority queue 00:13:06.541 SPDK bdev Controller (SPDK1 ) core 0: 6020.00 IO/s 16.61 secs/100000 ios 00:13:06.541 SPDK bdev Controller (SPDK1 ) core 1: 7006.33 IO/s 14.27 secs/100000 ios 00:13:06.541 SPDK bdev Controller (SPDK1 ) core 2: 7203.33 IO/s 13.88 secs/100000 ios 00:13:06.541 SPDK bdev Controller (SPDK1 ) core 3: 5948.00 IO/s 16.81 secs/100000 ios 00:13:06.541 ======================================================== 00:13:06.541 00:13:06.541 18:26:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:06.803 [2024-05-13 18:26:22.470649] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:06.803 Initializing NVMe Controllers 00:13:06.803 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:06.803 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:06.803 Namespace ID: 1 size: 0GB 00:13:06.803 Initialization complete. 00:13:06.803 INFO: using host memory buffer for IO 00:13:06.803 Hello world! 00:13:06.803 [2024-05-13 18:26:22.504183] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:06.803 18:26:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:07.061 [2024-05-13 18:26:22.825611] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:07.995 Initializing NVMe Controllers 00:13:07.995 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:07.995 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:07.995 Initialization complete. Launching workers. 
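Note: the overhead tool output that continues below prints one-line submit/complete latency summaries followed by cumulative histograms. A minimal post-processing sketch, assuming the run's stdout had been saved to a plain file named overhead.log (an assumption; this job does not do so), could pull out just the summary lines:

    # grab the one-line submit/complete summaries from a saved overhead run
    grep -E '^(submit|complete) \(in ns\)' overhead.log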
00:13:07.995 submit (in ns) avg, min, max = 9491.5, 3540.0, 4041840.0 00:13:07.995 complete (in ns) avg, min, max = 23004.8, 2089.1, 4074048.2 00:13:07.995 00:13:07.995 Submit histogram 00:13:07.995 ================ 00:13:07.995 Range in us Cumulative Count 00:13:07.995 3.535 - 3.549: 0.0148% ( 2) 00:13:07.995 3.549 - 3.564: 0.0371% ( 3) 00:13:07.995 3.564 - 3.578: 0.0593% ( 3) 00:13:07.995 3.578 - 3.593: 0.1112% ( 7) 00:13:07.995 3.593 - 3.607: 0.2223% ( 15) 00:13:07.995 3.607 - 3.622: 0.2890% ( 9) 00:13:07.995 3.622 - 3.636: 0.4150% ( 17) 00:13:07.995 3.636 - 3.651: 0.5854% ( 23) 00:13:07.995 3.651 - 3.665: 0.8522% ( 36) 00:13:07.995 3.665 - 3.680: 1.2079% ( 48) 00:13:07.995 3.680 - 3.695: 1.7637% ( 75) 00:13:07.995 3.695 - 3.709: 2.3640% ( 81) 00:13:07.995 3.709 - 3.724: 2.9420% ( 78) 00:13:07.995 3.724 - 3.753: 4.9133% ( 266) 00:13:07.995 3.753 - 3.782: 11.6422% ( 908) 00:13:07.995 3.782 - 3.811: 26.1746% ( 1961) 00:13:07.995 3.811 - 3.840: 46.8875% ( 2795) 00:13:07.995 3.840 - 3.869: 69.7421% ( 3084) 00:13:07.995 3.869 - 3.898: 80.6655% ( 1474) 00:13:07.995 3.898 - 3.927: 85.0748% ( 595) 00:13:07.995 3.927 - 3.956: 87.9131% ( 383) 00:13:07.995 3.956 - 3.985: 89.0989% ( 160) 00:13:07.995 3.985 - 4.015: 90.0474% ( 128) 00:13:07.995 4.015 - 4.044: 90.7218% ( 91) 00:13:07.995 4.044 - 4.073: 91.5222% ( 108) 00:13:07.995 4.073 - 4.102: 93.3155% ( 242) 00:13:07.995 4.102 - 4.131: 95.8797% ( 346) 00:13:07.995 4.131 - 4.160: 97.2951% ( 191) 00:13:07.995 4.160 - 4.189: 97.8731% ( 78) 00:13:07.995 4.189 - 4.218: 98.2066% ( 45) 00:13:07.995 4.218 - 4.247: 98.3548% ( 20) 00:13:07.995 4.247 - 4.276: 98.3919% ( 5) 00:13:07.995 4.276 - 4.305: 98.4363% ( 6) 00:13:07.995 4.305 - 4.335: 98.4512% ( 2) 00:13:07.995 4.335 - 4.364: 98.4586% ( 1) 00:13:07.995 4.364 - 4.393: 98.4734% ( 2) 00:13:07.995 4.393 - 4.422: 98.5030% ( 4) 00:13:07.995 4.422 - 4.451: 98.5401% ( 5) 00:13:07.995 4.451 - 4.480: 98.5771% ( 5) 00:13:07.995 4.480 - 4.509: 98.6216% ( 6) 00:13:07.995 4.509 - 4.538: 98.6883% ( 9) 00:13:07.995 4.538 - 4.567: 98.7179% ( 4) 00:13:07.995 4.567 - 4.596: 98.7624% ( 6) 00:13:07.995 4.596 - 4.625: 98.8291% ( 9) 00:13:07.995 4.625 - 4.655: 98.8884% ( 8) 00:13:07.995 4.655 - 4.684: 98.9180% ( 4) 00:13:07.995 4.684 - 4.713: 98.9477% ( 4) 00:13:07.995 4.713 - 4.742: 98.9773% ( 4) 00:13:07.995 4.742 - 4.771: 99.0070% ( 4) 00:13:07.995 4.771 - 4.800: 99.0366% ( 4) 00:13:07.995 4.800 - 4.829: 99.0588% ( 3) 00:13:07.995 4.829 - 4.858: 99.0811% ( 3) 00:13:07.995 4.858 - 4.887: 99.0885% ( 1) 00:13:07.995 4.887 - 4.916: 99.1255% ( 5) 00:13:07.995 4.916 - 4.945: 99.1404% ( 2) 00:13:07.995 4.945 - 4.975: 99.1478% ( 1) 00:13:07.995 4.975 - 5.004: 99.1552% ( 1) 00:13:07.995 5.033 - 5.062: 99.1774% ( 3) 00:13:07.995 5.062 - 5.091: 99.1848% ( 1) 00:13:07.995 5.091 - 5.120: 99.1996% ( 2) 00:13:07.995 5.120 - 5.149: 99.2219% ( 3) 00:13:07.995 5.149 - 5.178: 99.2367% ( 2) 00:13:07.995 5.236 - 5.265: 99.2441% ( 1) 00:13:07.995 5.295 - 5.324: 99.2515% ( 1) 00:13:07.995 5.324 - 5.353: 99.2589% ( 1) 00:13:07.995 5.382 - 5.411: 99.2663% ( 1) 00:13:07.995 5.411 - 5.440: 99.2812% ( 2) 00:13:07.995 5.527 - 5.556: 99.2886% ( 1) 00:13:07.995 5.615 - 5.644: 99.2960% ( 1) 00:13:07.996 5.644 - 5.673: 99.3034% ( 1) 00:13:07.996 5.673 - 5.702: 99.3108% ( 1) 00:13:07.996 5.818 - 5.847: 99.3182% ( 1) 00:13:07.996 6.051 - 6.080: 99.3256% ( 1) 00:13:07.996 6.895 - 6.924: 99.3330% ( 1) 00:13:07.996 8.145 - 8.204: 99.3404% ( 1) 00:13:07.996 8.378 - 8.436: 99.3479% ( 1) 00:13:07.996 9.018 - 9.076: 99.3553% ( 1) 00:13:07.996 9.309 - 9.367: 
99.3627% ( 1) 00:13:07.996 9.367 - 9.425: 99.3701% ( 1) 00:13:07.996 9.484 - 9.542: 99.3775% ( 1) 00:13:07.996 9.600 - 9.658: 99.3849% ( 1) 00:13:07.996 9.658 - 9.716: 99.3923% ( 1) 00:13:07.996 9.716 - 9.775: 99.4071% ( 2) 00:13:07.996 9.775 - 9.833: 99.4146% ( 1) 00:13:07.996 9.833 - 9.891: 99.4220% ( 1) 00:13:07.996 10.007 - 10.065: 99.4368% ( 2) 00:13:07.996 10.065 - 10.124: 99.4442% ( 1) 00:13:07.996 10.182 - 10.240: 99.4590% ( 2) 00:13:07.996 10.240 - 10.298: 99.4664% ( 1) 00:13:07.996 10.356 - 10.415: 99.4813% ( 2) 00:13:07.996 10.473 - 10.531: 99.5183% ( 5) 00:13:07.996 10.589 - 10.647: 99.5405% ( 3) 00:13:07.996 10.647 - 10.705: 99.5628% ( 3) 00:13:07.996 10.705 - 10.764: 99.5702% ( 1) 00:13:07.996 10.764 - 10.822: 99.5850% ( 2) 00:13:07.996 10.880 - 10.938: 99.5924% ( 1) 00:13:07.996 10.938 - 10.996: 99.6072% ( 2) 00:13:07.996 10.996 - 11.055: 99.6221% ( 2) 00:13:07.996 11.055 - 11.113: 99.6369% ( 2) 00:13:07.996 11.113 - 11.171: 99.6591% ( 3) 00:13:07.996 11.287 - 11.345: 99.6665% ( 1) 00:13:07.996 11.404 - 11.462: 99.6739% ( 1) 00:13:07.996 11.753 - 11.811: 99.6813% ( 1) 00:13:07.996 12.218 - 12.276: 99.6888% ( 1) 00:13:07.996 12.276 - 12.335: 99.6962% ( 1) 00:13:07.996 12.335 - 12.393: 99.7036% ( 1) 00:13:07.996 12.393 - 12.451: 99.7110% ( 1) 00:13:07.996 12.509 - 12.567: 99.7184% ( 1) 00:13:07.996 13.615 - 13.673: 99.7258% ( 1) 00:13:07.996 13.731 - 13.789: 99.7332% ( 1) 00:13:07.996 14.255 - 14.313: 99.7406% ( 1) 00:13:07.996 18.153 - 18.269: 99.7629% ( 3) 00:13:07.996 18.385 - 18.502: 99.7703% ( 1) 00:13:07.996 18.502 - 18.618: 99.7777% ( 1) 00:13:07.996 18.851 - 18.967: 99.7851% ( 1) 00:13:07.996 19.316 - 19.433: 99.7925% ( 1) 00:13:07.996 19.549 - 19.665: 99.8073% ( 2) 00:13:07.996 19.898 - 20.015: 99.8221% ( 2) 00:13:07.996 20.480 - 20.596: 99.8370% ( 2) 00:13:07.996 20.596 - 20.713: 99.8444% ( 1) 00:13:07.996 20.829 - 20.945: 99.8518% ( 1) 00:13:07.996 25.716 - 25.833: 99.8592% ( 1) 00:13:07.996 3038.487 - 3053.382: 99.8666% ( 1) 00:13:07.996 3991.738 - 4021.527: 99.9926% ( 17) 00:13:07.996 4021.527 - 4051.316: 100.0000% ( 1) 00:13:07.996 00:13:07.996 Complete histogram 00:13:07.996 ================== 00:13:07.996 Range in us Cumulative Count 00:13:07.996 2.080 - 2.095: 0.0074% ( 1) 00:13:07.996 2.095 - 2.109: 0.3854% ( 51) 00:13:07.996 2.109 - 2.124: 0.6077% ( 30) 00:13:07.996 2.153 - 2.167: 1.1931% ( 79) 00:13:07.996 2.167 - 2.182: 4.5353% ( 451) 00:13:07.996 2.182 - 2.196: 5.9212% ( 187) 00:13:07.996 2.196 - 2.211: 5.9656% ( 6) 00:13:07.996 2.211 - 2.225: 6.0545% ( 12) 00:13:07.996 2.225 - 2.240: 9.6784% ( 489) 00:13:07.996 2.240 - 2.255: 68.7343% ( 7969) 00:13:07.996 2.255 - 2.269: 89.6398% ( 2821) 00:13:07.996 2.269 - 2.284: 90.7218% ( 146) 00:13:07.996 2.284 - 2.298: 91.4332% ( 96) 00:13:07.996 2.298 - 2.313: 94.0418% ( 352) 00:13:07.996 2.313 - 2.327: 95.7314% ( 228) 00:13:07.996 2.327 - 2.342: 96.6726% ( 127) 00:13:07.996 2.342 - 2.356: 97.3025% ( 85) 00:13:07.996 2.356 - 2.371: 97.7323% ( 58) 00:13:07.996 2.371 - 2.385: 97.9917% ( 35) 00:13:07.996 2.385 - 2.400: 98.1621% ( 23) 00:13:07.996 2.400 - 2.415: 98.4141% ( 34) 00:13:07.996 2.415 - 2.429: 98.5771% ( 22) 00:13:07.996 2.429 - 2.444: 98.6513% ( 10) 00:13:07.996 2.444 - 2.458: 98.7179% ( 9) 00:13:07.996 2.458 - 2.473: 98.7402% ( 3) 00:13:07.996 2.473 - 2.487: 98.7624% ( 3) 00:13:07.996 2.487 - 2.502: 98.7921% ( 4) 00:13:07.996 2.502 - 2.516: 98.8143% ( 3) 00:13:07.996 2.516 - 2.531: 98.8291% ( 2) 00:13:07.996 2.531 - 2.545: 98.8513% ( 3) 00:13:07.996 2.545 - 2.560: 98.8588% ( 1) 00:13:07.996 2.560 - 2.575: 
98.8810% ( 3) 00:13:07.996 2.575 - 2.589: 98.8884% ( 1) 00:13:07.996 2.589 - 2.604: 98.8958% ( 1) 00:13:07.996 2.604 - 2.618: 98.9106% ( 2) 00:13:07.996 2.618 - 2.633: 98.9329% ( 3) 00:13:07.996 2.633 - 2.647: 98.9403% ( 1) 00:13:07.996 2.720 - 2.735: 98.9477% ( 1) 00:13:07.996 3.011 - 3.025: 98.9551% ( 1) 00:13:07.996 3.520 - 3.535: 98.9625% ( 1) 00:13:07.996 [2024-05-13 18:26:23.841472] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:07.996 3.607 - 3.622: 98.9699% ( 1) 00:13:07.996 3.636 - 3.651: 98.9773% ( 1) 00:13:07.996 3.709 - 3.724: 98.9847% ( 1) 00:13:07.996 3.782 - 3.811: 98.9996% ( 2) 00:13:07.996 3.811 - 3.840: 99.0070% ( 1) 00:13:07.996 3.840 - 3.869: 99.0218% ( 2) 00:13:07.996 3.869 - 3.898: 99.0292% ( 1) 00:13:07.996 3.927 - 3.956: 99.0440% ( 2) 00:13:07.996 3.956 - 3.985: 99.0588% ( 2) 00:13:07.996 4.015 - 4.044: 99.0885% ( 4) 00:13:07.996 4.044 - 4.073: 99.1033% ( 2) 00:13:07.996 4.073 - 4.102: 99.1181% ( 2) 00:13:07.996 4.160 - 4.189: 99.1255% ( 1) 00:13:07.996 4.335 - 4.364: 99.1329% ( 1) 00:13:07.996 4.393 - 4.422: 99.1404% ( 1) 00:13:07.996 4.509 - 4.538: 99.1478% ( 1) 00:13:07.996 4.538 - 4.567: 99.1552% ( 1) 00:13:07.996 4.596 - 4.625: 99.1626% ( 1) 00:13:07.996 4.742 - 4.771: 99.1774% ( 2) 00:13:07.996 5.062 - 5.091: 99.1848% ( 1) 00:13:07.996 5.120 - 5.149: 99.1922% ( 1) 00:13:07.996 5.149 - 5.178: 99.1996% ( 1) 00:13:07.996 5.353 - 5.382: 99.2071% ( 1) 00:13:07.996 5.585 - 5.615: 99.2219% ( 2) 00:13:07.996 5.702 - 5.731: 99.2293% ( 1) 00:13:07.996 6.604 - 6.633: 99.2367% ( 1) 00:13:07.996 6.720 - 6.749: 99.2441% ( 1) 00:13:07.996 7.796 - 7.855: 99.2515% ( 1) 00:13:07.996 7.855 - 7.913: 99.2589% ( 1) 00:13:07.996 8.029 - 8.087: 99.2663% ( 1) 00:13:07.996 8.204 - 8.262: 99.2738% ( 1) 00:13:07.996 8.262 - 8.320: 99.2812% ( 1) 00:13:07.996 8.378 - 8.436: 99.2886% ( 1) 00:13:07.996 8.669 - 8.727: 99.2960% ( 1) 00:13:07.996 8.785 - 8.844: 99.3034% ( 1) 00:13:07.996 8.960 - 9.018: 99.3108% ( 1) 00:13:07.996 9.135 - 9.193: 99.3182% ( 1) 00:13:07.996 9.193 - 9.251: 99.3330% ( 2) 00:13:07.996 9.658 - 9.716: 99.3404% ( 1) 00:13:07.996 9.716 - 9.775: 99.3479% ( 1) 00:13:07.996 10.065 - 10.124: 99.3553% ( 1) 00:13:07.996 10.415 - 10.473: 99.3627% ( 1) 00:13:07.996 12.276 - 12.335: 99.3701% ( 1) 00:13:07.996 14.255 - 14.313: 99.3775% ( 1) 00:13:07.996 16.407 - 16.524: 99.3849% ( 1) 00:13:07.996 16.524 - 16.640: 99.3997% ( 2) 00:13:07.996 16.989 - 17.105: 99.4071% ( 1) 00:13:07.996 17.222 - 17.338: 99.4146% ( 1) 00:13:07.996 17.338 - 17.455: 99.4220% ( 1) 00:13:07.996 17.571 - 17.687: 99.4368% ( 2) 00:13:07.996 17.687 - 17.804: 99.4516% ( 2) 00:13:07.996 17.920 - 18.036: 99.4590% ( 1) 00:13:07.996 18.385 - 18.502: 99.4664% ( 1) 00:13:07.996 22.458 - 22.575: 99.4738% ( 1) 00:13:07.996 24.436 - 24.553: 99.4813% ( 1) 00:13:07.996 3038.487 - 3053.382: 99.4887% ( 1) 00:13:07.996 3932.160 - 3961.949: 99.5035% ( 2) 00:13:07.996 3961.949 - 3991.738: 99.5257% ( 3) 00:13:07.996 3991.738 - 4021.527: 99.9778% ( 61) 00:13:07.996 4021.527 - 4051.316: 99.9926% ( 2) 00:13:07.996 4051.316 - 4081.105: 100.0000% ( 1) 00:13:07.996 00:13:07.996 18:26:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:07.996 18:26:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:07.996 18:26:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local
subnqn=nqn.2019-07.io.spdk:cnode1 00:13:07.996 18:26:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:07.996 18:26:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:08.254 [ 00:13:08.254 { 00:13:08.254 "allow_any_host": true, 00:13:08.254 "hosts": [], 00:13:08.254 "listen_addresses": [], 00:13:08.254 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:08.254 "subtype": "Discovery" 00:13:08.254 }, 00:13:08.254 { 00:13:08.254 "allow_any_host": true, 00:13:08.254 "hosts": [], 00:13:08.254 "listen_addresses": [ 00:13:08.254 { 00:13:08.254 "adrfam": "IPv4", 00:13:08.254 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:08.254 "trsvcid": "0", 00:13:08.254 "trtype": "VFIOUSER" 00:13:08.254 } 00:13:08.254 ], 00:13:08.254 "max_cntlid": 65519, 00:13:08.254 "max_namespaces": 32, 00:13:08.254 "min_cntlid": 1, 00:13:08.254 "model_number": "SPDK bdev Controller", 00:13:08.254 "namespaces": [ 00:13:08.254 { 00:13:08.254 "bdev_name": "Malloc1", 00:13:08.254 "name": "Malloc1", 00:13:08.254 "nguid": "16AF210AB7344AE8AEE2636949E5FA61", 00:13:08.254 "nsid": 1, 00:13:08.254 "uuid": "16af210a-b734-4ae8-aee2-636949e5fa61" 00:13:08.254 } 00:13:08.254 ], 00:13:08.254 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:08.254 "serial_number": "SPDK1", 00:13:08.254 "subtype": "NVMe" 00:13:08.254 }, 00:13:08.254 { 00:13:08.254 "allow_any_host": true, 00:13:08.254 "hosts": [], 00:13:08.254 "listen_addresses": [ 00:13:08.254 { 00:13:08.254 "adrfam": "IPv4", 00:13:08.254 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:08.254 "trsvcid": "0", 00:13:08.254 "trtype": "VFIOUSER" 00:13:08.254 } 00:13:08.254 ], 00:13:08.254 "max_cntlid": 65519, 00:13:08.254 "max_namespaces": 32, 00:13:08.254 "min_cntlid": 1, 00:13:08.254 "model_number": "SPDK bdev Controller", 00:13:08.254 "namespaces": [ 00:13:08.254 { 00:13:08.254 "bdev_name": "Malloc2", 00:13:08.254 "name": "Malloc2", 00:13:08.254 "nguid": "51167CF181EC4C69872E9D031AD4B011", 00:13:08.254 "nsid": 1, 00:13:08.254 "uuid": "51167cf1-81ec-4c69-872e-9d031ad4b011" 00:13:08.254 } 00:13:08.254 ], 00:13:08.254 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:08.254 "serial_number": "SPDK2", 00:13:08.254 "subtype": "NVMe" 00:13:08.254 } 00:13:08.254 ] 00:13:08.254 18:26:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:08.254 18:26:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=76285 00:13:08.254 18:26:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:08.254 18:26:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:08.254 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:13:08.254 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:08.254 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:13:08.254 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=1 00:13:08.254 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:13:08.512 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:08.512 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:13:08.512 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=2 00:13:08.512 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:13:08.512 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:08.512 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:13:08.512 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=3 00:13:08.512 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:13:08.512 [2024-05-13 18:26:24.349180] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:08.512 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:08.512 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:08.512 18:26:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:13:08.512 18:26:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:08.512 18:26:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:08.770 Malloc3 00:13:09.028 18:26:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:09.286 [2024-05-13 18:26:25.003793] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:09.286 18:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:09.286 Asynchronous Event Request test 00:13:09.286 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:09.286 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:09.286 Registering asynchronous event callbacks... 00:13:09.286 Starting namespace attribute notice tests for all controllers... 00:13:09.286 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:09.286 aer_cb - Changed Namespace 00:13:09.286 Cleaning up... 
00:13:09.544 [ 00:13:09.544 { 00:13:09.544 "allow_any_host": true, 00:13:09.544 "hosts": [], 00:13:09.544 "listen_addresses": [], 00:13:09.544 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:09.544 "subtype": "Discovery" 00:13:09.544 }, 00:13:09.544 { 00:13:09.544 "allow_any_host": true, 00:13:09.544 "hosts": [], 00:13:09.544 "listen_addresses": [ 00:13:09.544 { 00:13:09.544 "adrfam": "IPv4", 00:13:09.544 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:09.544 "trsvcid": "0", 00:13:09.544 "trtype": "VFIOUSER" 00:13:09.544 } 00:13:09.544 ], 00:13:09.544 "max_cntlid": 65519, 00:13:09.544 "max_namespaces": 32, 00:13:09.544 "min_cntlid": 1, 00:13:09.544 "model_number": "SPDK bdev Controller", 00:13:09.544 "namespaces": [ 00:13:09.544 { 00:13:09.544 "bdev_name": "Malloc1", 00:13:09.544 "name": "Malloc1", 00:13:09.544 "nguid": "16AF210AB7344AE8AEE2636949E5FA61", 00:13:09.544 "nsid": 1, 00:13:09.544 "uuid": "16af210a-b734-4ae8-aee2-636949e5fa61" 00:13:09.544 }, 00:13:09.544 { 00:13:09.544 "bdev_name": "Malloc3", 00:13:09.544 "name": "Malloc3", 00:13:09.544 "nguid": "A69B88F880164325BAE5CA272ACF4733", 00:13:09.544 "nsid": 2, 00:13:09.544 "uuid": "a69b88f8-8016-4325-bae5-ca272acf4733" 00:13:09.544 } 00:13:09.544 ], 00:13:09.544 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:09.544 "serial_number": "SPDK1", 00:13:09.544 "subtype": "NVMe" 00:13:09.544 }, 00:13:09.544 { 00:13:09.544 "allow_any_host": true, 00:13:09.544 "hosts": [], 00:13:09.544 "listen_addresses": [ 00:13:09.544 { 00:13:09.544 "adrfam": "IPv4", 00:13:09.544 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:09.544 "trsvcid": "0", 00:13:09.544 "trtype": "VFIOUSER" 00:13:09.544 } 00:13:09.544 ], 00:13:09.544 "max_cntlid": 65519, 00:13:09.544 "max_namespaces": 32, 00:13:09.544 "min_cntlid": 1, 00:13:09.544 "model_number": "SPDK bdev Controller", 00:13:09.544 "namespaces": [ 00:13:09.544 { 00:13:09.544 "bdev_name": "Malloc2", 00:13:09.544 "name": "Malloc2", 00:13:09.544 "nguid": "51167CF181EC4C69872E9D031AD4B011", 00:13:09.544 "nsid": 1, 00:13:09.544 "uuid": "51167cf1-81ec-4c69-872e-9d031ad4b011" 00:13:09.544 } 00:13:09.544 ], 00:13:09.544 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:09.544 "serial_number": "SPDK2", 00:13:09.544 "subtype": "NVMe" 00:13:09.544 } 00:13:09.544 ] 00:13:09.544 18:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 76285 00:13:09.544 18:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:09.544 18:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:09.544 18:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:09.544 18:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:09.544 [2024-05-13 18:26:25.321338] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:13:09.544 [2024-05-13 18:26:25.321380] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76327 ] 00:13:09.544 [2024-05-13 18:26:25.458814] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:09.544 [2024-05-13 18:26:25.467083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:09.544 [2024-05-13 18:26:25.467122] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff63bd22000 00:13:09.544 [2024-05-13 18:26:25.468083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:09.544 [2024-05-13 18:26:25.469091] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:09.544 [2024-05-13 18:26:25.470097] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:09.544 [2024-05-13 18:26:25.471104] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:09.544 [2024-05-13 18:26:25.472112] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:09.544 [2024-05-13 18:26:25.473122] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:09.544 [2024-05-13 18:26:25.474130] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:09.544 [2024-05-13 18:26:25.475133] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:09.544 [2024-05-13 18:26:25.476143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:09.544 [2024-05-13 18:26:25.476181] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff63b3b5000 00:13:09.544 [2024-05-13 18:26:25.477440] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:09.804 [2024-05-13 18:26:25.498018] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:09.804 [2024-05-13 18:26:25.498062] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:09.804 [2024-05-13 18:26:25.500150] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:09.804 [2024-05-13 18:26:25.500226] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:09.805 [2024-05-13 18:26:25.500336] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:09.805 [2024-05-13 
18:26:25.500361] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:09.805 [2024-05-13 18:26:25.500368] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:09.805 [2024-05-13 18:26:25.501137] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:09.805 [2024-05-13 18:26:25.501163] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:09.805 [2024-05-13 18:26:25.501175] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:09.805 [2024-05-13 18:26:25.502141] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:09.805 [2024-05-13 18:26:25.502166] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:09.805 [2024-05-13 18:26:25.502178] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:09.805 [2024-05-13 18:26:25.503149] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:09.805 [2024-05-13 18:26:25.503175] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:09.805 [2024-05-13 18:26:25.504147] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:09.805 [2024-05-13 18:26:25.504171] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:09.805 [2024-05-13 18:26:25.504179] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:09.805 [2024-05-13 18:26:25.504188] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:09.805 [2024-05-13 18:26:25.504295] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:09.805 [2024-05-13 18:26:25.504300] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:09.805 [2024-05-13 18:26:25.504306] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:09.805 [2024-05-13 18:26:25.505157] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:09.805 [2024-05-13 18:26:25.506160] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:09.805 [2024-05-13 18:26:25.507165] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: 
offset 0x14, value 0x460001 00:13:09.805 [2024-05-13 18:26:25.508165] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:09.805 [2024-05-13 18:26:25.508249] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:09.805 [2024-05-13 18:26:25.509186] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:09.805 [2024-05-13 18:26:25.509209] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:09.805 [2024-05-13 18:26:25.509216] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.509240] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:09.805 [2024-05-13 18:26:25.509257] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.509274] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:09.805 [2024-05-13 18:26:25.509280] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:09.805 [2024-05-13 18:26:25.509295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:09.805 [2024-05-13 18:26:25.515595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:09.805 [2024-05-13 18:26:25.515734] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:09.805 [2024-05-13 18:26:25.515746] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:09.805 [2024-05-13 18:26:25.515751] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:09.805 [2024-05-13 18:26:25.515756] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:09.805 [2024-05-13 18:26:25.515762] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:09.805 [2024-05-13 18:26:25.515768] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:09.805 [2024-05-13 18:26:25.515774] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.515786] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.515812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:09.805 [2024-05-13 18:26:25.522631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:09.805 [2024-05-13 18:26:25.522662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.805 [2024-05-13 18:26:25.522673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.805 [2024-05-13 18:26:25.522682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.805 [2024-05-13 18:26:25.522692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:09.805 [2024-05-13 18:26:25.522698] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.522714] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.522726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:09.805 [2024-05-13 18:26:25.530613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:09.805 [2024-05-13 18:26:25.530632] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:09.805 [2024-05-13 18:26:25.530656] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.530672] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.530680] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.530691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:09.805 [2024-05-13 18:26:25.538644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:09.805 [2024-05-13 18:26:25.538736] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.538751] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.538762] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:09.805 [2024-05-13 18:26:25.538768] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:09.805 [2024-05-13 18:26:25.538776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:09.805 [2024-05-13 
18:26:25.546598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:09.805 [2024-05-13 18:26:25.546645] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:09.805 [2024-05-13 18:26:25.546661] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.546672] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.546682] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:09.805 [2024-05-13 18:26:25.546687] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:09.805 [2024-05-13 18:26:25.546695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:09.805 [2024-05-13 18:26:25.554588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:09.805 [2024-05-13 18:26:25.554626] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.554639] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.554650] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:09.805 [2024-05-13 18:26:25.554656] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:09.805 [2024-05-13 18:26:25.554663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:09.805 [2024-05-13 18:26:25.562587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:09.805 [2024-05-13 18:26:25.562612] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.562623] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.562635] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.562642] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.562648] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:09.805 [2024-05-13 18:26:25.562653] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:09.806 [2024-05-13 18:26:25.562659] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:09.806 [2024-05-13 18:26:25.562665] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:09.806 [2024-05-13 18:26:25.562692] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:09.806 [2024-05-13 18:26:25.570589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:09.806 [2024-05-13 18:26:25.570617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:09.806 [2024-05-13 18:26:25.578589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:09.806 [2024-05-13 18:26:25.578617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:09.806 [2024-05-13 18:26:25.586583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:09.806 [2024-05-13 18:26:25.586622] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:09.806 [2024-05-13 18:26:25.594589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:09.806 [2024-05-13 18:26:25.594621] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:09.806 [2024-05-13 18:26:25.594629] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:09.806 [2024-05-13 18:26:25.594633] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:09.806 [2024-05-13 18:26:25.594637] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:09.806 [2024-05-13 18:26:25.594644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:09.806 [2024-05-13 18:26:25.594654] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:09.806 [2024-05-13 18:26:25.594659] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:09.806 [2024-05-13 18:26:25.594665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:09.806 [2024-05-13 18:26:25.594673] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:09.806 [2024-05-13 18:26:25.594678] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:09.806 [2024-05-13 18:26:25.594684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:09.806 [2024-05-13 18:26:25.594693] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:09.806 [2024-05-13 18:26:25.594698] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:09.806 [2024-05-13 18:26:25.594704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:09.806 [2024-05-13 18:26:25.602588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:09.806 [2024-05-13 18:26:25.602624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:09.806 [2024-05-13 18:26:25.602638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:09.806 [2024-05-13 18:26:25.602649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:09.806 ===================================================== 00:13:09.806 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:09.806 ===================================================== 00:13:09.806 Controller Capabilities/Features 00:13:09.806 ================================ 00:13:09.806 Vendor ID: 4e58 00:13:09.806 Subsystem Vendor ID: 4e58 00:13:09.806 Serial Number: SPDK2 00:13:09.806 Model Number: SPDK bdev Controller 00:13:09.806 Firmware Version: 24.05 00:13:09.806 Recommended Arb Burst: 6 00:13:09.806 IEEE OUI Identifier: 8d 6b 50 00:13:09.806 Multi-path I/O 00:13:09.806 May have multiple subsystem ports: Yes 00:13:09.806 May have multiple controllers: Yes 00:13:09.806 Associated with SR-IOV VF: No 00:13:09.806 Max Data Transfer Size: 131072 00:13:09.806 Max Number of Namespaces: 32 00:13:09.806 Max Number of I/O Queues: 127 00:13:09.806 NVMe Specification Version (VS): 1.3 00:13:09.806 NVMe Specification Version (Identify): 1.3 00:13:09.806 Maximum Queue Entries: 256 00:13:09.806 Contiguous Queues Required: Yes 00:13:09.806 Arbitration Mechanisms Supported 00:13:09.806 Weighted Round Robin: Not Supported 00:13:09.806 Vendor Specific: Not Supported 00:13:09.806 Reset Timeout: 15000 ms 00:13:09.806 Doorbell Stride: 4 bytes 00:13:09.806 NVM Subsystem Reset: Not Supported 00:13:09.806 Command Sets Supported 00:13:09.806 NVM Command Set: Supported 00:13:09.806 Boot Partition: Not Supported 00:13:09.806 Memory Page Size Minimum: 4096 bytes 00:13:09.806 Memory Page Size Maximum: 4096 bytes 00:13:09.806 Persistent Memory Region: Not Supported 00:13:09.806 Optional Asynchronous Events Supported 00:13:09.806 Namespace Attribute Notices: Supported 00:13:09.806 Firmware Activation Notices: Not Supported 00:13:09.806 ANA Change Notices: Not Supported 00:13:09.806 PLE Aggregate Log Change Notices: Not Supported 00:13:09.806 LBA Status Info Alert Notices: Not Supported 00:13:09.806 EGE Aggregate Log Change Notices: Not Supported 00:13:09.806 Normal NVM Subsystem Shutdown event: Not Supported 00:13:09.806 Zone Descriptor Change Notices: Not Supported 00:13:09.806 Discovery Log Change Notices: Not Supported 00:13:09.806 Controller Attributes 00:13:09.806 128-bit Host Identifier: Supported 00:13:09.806 Non-Operational Permissive Mode: Not Supported 00:13:09.806 NVM Sets: Not Supported 00:13:09.806 Read Recovery Levels: Not Supported 00:13:09.806 Endurance Groups: Not Supported 00:13:09.806 Predictable Latency Mode: Not Supported 00:13:09.806 Traffic Based Keep ALive: Not Supported 00:13:09.806 Namespace Granularity: Not Supported 
00:13:09.806 SQ Associations: Not Supported 00:13:09.806 UUID List: Not Supported 00:13:09.806 Multi-Domain Subsystem: Not Supported 00:13:09.806 Fixed Capacity Management: Not Supported 00:13:09.806 Variable Capacity Management: Not Supported 00:13:09.806 Delete Endurance Group: Not Supported 00:13:09.806 Delete NVM Set: Not Supported 00:13:09.806 Extended LBA Formats Supported: Not Supported 00:13:09.806 Flexible Data Placement Supported: Not Supported 00:13:09.806 00:13:09.806 Controller Memory Buffer Support 00:13:09.806 ================================ 00:13:09.806 Supported: No 00:13:09.806 00:13:09.806 Persistent Memory Region Support 00:13:09.806 ================================ 00:13:09.806 Supported: No 00:13:09.806 00:13:09.806 Admin Command Set Attributes 00:13:09.806 ============================ 00:13:09.806 Security Send/Receive: Not Supported 00:13:09.806 Format NVM: Not Supported 00:13:09.806 Firmware Activate/Download: Not Supported 00:13:09.806 Namespace Management: Not Supported 00:13:09.806 Device Self-Test: Not Supported 00:13:09.806 Directives: Not Supported 00:13:09.806 NVMe-MI: Not Supported 00:13:09.806 Virtualization Management: Not Supported 00:13:09.806 Doorbell Buffer Config: Not Supported 00:13:09.806 Get LBA Status Capability: Not Supported 00:13:09.806 Command & Feature Lockdown Capability: Not Supported 00:13:09.806 Abort Command Limit: 4 00:13:09.806 Async Event Request Limit: 4 00:13:09.806 Number of Firmware Slots: N/A 00:13:09.806 Firmware Slot 1 Read-Only: N/A 00:13:09.806 Firmware Activation Without Reset: N/A 00:13:09.806 Multiple Update Detection Support: N/A 00:13:09.806 Firmware Update Granularity: No Information Provided 00:13:09.806 Per-Namespace SMART Log: No 00:13:09.806 Asymmetric Namespace Access Log Page: Not Supported 00:13:09.806 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:09.806 Command Effects Log Page: Supported 00:13:09.806 Get Log Page Extended Data: Supported 00:13:09.806 Telemetry Log Pages: Not Supported 00:13:09.806 Persistent Event Log Pages: Not Supported 00:13:09.806 Supported Log Pages Log Page: May Support 00:13:09.806 Commands Supported & Effects Log Page: Not Supported 00:13:09.806 Feature Identifiers & Effects Log Page:May Support 00:13:09.806 NVMe-MI Commands & Effects Log Page: May Support 00:13:09.806 Data Area 4 for Telemetry Log: Not Supported 00:13:09.806 Error Log Page Entries Supported: 128 00:13:09.807 Keep Alive: Supported 00:13:09.807 Keep Alive Granularity: 10000 ms 00:13:09.807 00:13:09.807 NVM Command Set Attributes 00:13:09.807 ========================== 00:13:09.807 Submission Queue Entry Size 00:13:09.807 Max: 64 00:13:09.807 Min: 64 00:13:09.807 Completion Queue Entry Size 00:13:09.807 Max: 16 00:13:09.807 Min: 16 00:13:09.807 Number of Namespaces: 32 00:13:09.807 Compare Command: Supported 00:13:09.807 Write Uncorrectable Command: Not Supported 00:13:09.807 Dataset Management Command: Supported 00:13:09.807 Write Zeroes Command: Supported 00:13:09.807 Set Features Save Field: Not Supported 00:13:09.807 Reservations: Not Supported 00:13:09.807 Timestamp: Not Supported 00:13:09.807 Copy: Supported 00:13:09.807 Volatile Write Cache: Present 00:13:09.807 Atomic Write Unit (Normal): 1 00:13:09.807 Atomic Write Unit (PFail): 1 00:13:09.807 Atomic Compare & Write Unit: 1 00:13:09.807 Fused Compare & Write: Supported 00:13:09.807 Scatter-Gather List 00:13:09.807 SGL Command Set: Supported (Dword aligned) 00:13:09.807 SGL Keyed: Not Supported 00:13:09.807 SGL Bit Bucket Descriptor: Not Supported 00:13:09.807 
SGL Metadata Pointer: Not Supported 00:13:09.807 Oversized SGL: Not Supported 00:13:09.807 SGL Metadata Address: Not Supported 00:13:09.807 SGL Offset: Not Supported 00:13:09.807 Transport SGL Data Block: Not Supported 00:13:09.807 Replay Protected Memory Block: Not Supported 00:13:09.807 00:13:09.807 Firmware Slot Information 00:13:09.807 ========================= 00:13:09.807 Active slot: 1 00:13:09.807 Slot 1 Firmware Revision: 24.05 00:13:09.807 00:13:09.807 00:13:09.807 Commands Supported and Effects 00:13:09.807 ============================== 00:13:09.807 Admin Commands 00:13:09.807 -------------- 00:13:09.807 Get Log Page (02h): Supported 00:13:09.807 Identify (06h): Supported 00:13:09.807 Abort (08h): Supported 00:13:09.807 Set Features (09h): Supported 00:13:09.807 Get Features (0Ah): Supported 00:13:09.807 Asynchronous Event Request (0Ch): Supported 00:13:09.807 Keep Alive (18h): Supported 00:13:09.807 I/O Commands 00:13:09.807 ------------ 00:13:09.807 Flush (00h): Supported LBA-Change 00:13:09.807 Write (01h): Supported LBA-Change 00:13:09.807 Read (02h): Supported 00:13:09.807 Compare (05h): Supported 00:13:09.807 Write Zeroes (08h): Supported LBA-Change 00:13:09.807 Dataset Management (09h): Supported LBA-Change 00:13:09.807 Copy (19h): Supported LBA-Change 00:13:09.807 Unknown (79h): Supported LBA-Change 00:13:09.807 Unknown (7Ah): Supported 00:13:09.807 00:13:09.807 Error Log 00:13:09.807 ========= 00:13:09.807 00:13:09.807 Arbitration 00:13:09.807 =========== 00:13:09.807 Arbitration Burst: 1 00:13:09.807 00:13:09.807 Power Management 00:13:09.807 ================ 00:13:09.807 Number of Power States: 1 00:13:09.807 Current Power State: Power State #0 00:13:09.807 Power State #0: 00:13:09.807 Max Power: 0.00 W 00:13:09.807 Non-Operational State: Operational 00:13:09.807 Entry Latency: Not Reported 00:13:09.807 Exit Latency: Not Reported 00:13:09.807 Relative Read Throughput: 0 00:13:09.807 Relative Read Latency: 0 00:13:09.807 Relative Write Throughput: 0 00:13:09.807 Relative Write Latency: 0 00:13:09.807 Idle Power: Not Reported 00:13:09.807 Active Power: Not Reported 00:13:09.807 Non-Operational Permissive Mode: Not Supported 00:13:09.807 00:13:09.807 Health Information 00:13:09.807 ================== 00:13:09.807 Critical Warnings: 00:13:09.807 Available Spare Space: OK 00:13:09.807 Temperature: OK 00:13:09.807 Device Reliability: OK 00:13:09.807 Read Only: No 00:13:09.807 Volatile Memory Backup: OK 00:13:09.807 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:09.807 [2024-05-13 18:26:25.602767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:09.807 [2024-05-13 18:26:25.610584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:09.807 [2024-05-13 18:26:25.610643] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:09.807 [2024-05-13 18:26:25.610659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.807 [2024-05-13 18:26:25.610667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.807 [2024-05-13 18:26:25.610674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.807 [2024-05-13 18:26:25.610682]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:09.807 [2024-05-13 18:26:25.610782] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:09.807 [2024-05-13 18:26:25.610801] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:09.807 [2024-05-13 18:26:25.611785] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:09.807 [2024-05-13 18:26:25.611879] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:09.807 [2024-05-13 18:26:25.611891] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:09.807 [2024-05-13 18:26:25.612797] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:09.807 [2024-05-13 18:26:25.612824] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:09.807 [2024-05-13 18:26:25.613015] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:09.807 [2024-05-13 18:26:25.619587] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 9, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:09.807 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:09.807 Available Spare: 0% 00:13:09.807 Available Spare Threshold: 0% 00:13:09.807 Life Percentage Used: 0% 00:13:09.807 Data Units Read: 0 00:13:09.807 Data Units Written: 0 00:13:09.807 Host Read Commands: 0 00:13:09.807 Host Write Commands: 0 00:13:09.807 Controller Busy Time: 0 minutes 00:13:09.807 Power Cycles: 0 00:13:09.807 Power On Hours: 0 hours 00:13:09.807 Unsafe Shutdowns: 0 00:13:09.807 Unrecoverable Media Errors: 0 00:13:09.807 Lifetime Error Log Entries: 0 00:13:09.807 Warning Temperature Time: 0 minutes 00:13:09.807 Critical Temperature Time: 0 minutes 00:13:09.807 00:13:09.807 Number of Queues 00:13:09.807 ================ 00:13:09.807 Number of I/O Submission Queues: 127 00:13:09.807 Number of I/O Completion Queues: 127 00:13:09.807 00:13:09.807 Active Namespaces 00:13:09.807 ================= 00:13:09.807 Namespace ID:1 00:13:09.807 Error Recovery Timeout: Unlimited 00:13:09.807 Command Set Identifier: NVM (00h) 00:13:09.807 Deallocate: Supported 00:13:09.807 Deallocated/Unwritten Error: Not Supported 00:13:09.807 Deallocated Read Value: Unknown 00:13:09.807 Deallocate in Write Zeroes: Not Supported 00:13:09.807 Deallocated Guard Field: 0xFFFF 00:13:09.807 Flush: Supported 00:13:09.807 Reservation: Supported 00:13:09.807 Namespace Sharing Capabilities: Multiple Controllers 00:13:09.807 Size (in LBAs): 131072 (0GiB) 00:13:09.807 Capacity (in LBAs): 131072 (0GiB) 00:13:09.807 Utilization (in LBAs): 131072 (0GiB) 00:13:09.807 NGUID: 51167CF181EC4C69872E9D031AD4B011 00:13:09.807 UUID: 51167cf1-81ec-4c69-872e-9d031ad4b011 00:13:09.807 Thin Provisioning: Not Supported 00:13:09.807 Per-NS Atomic Units: Yes 00:13:09.807 Atomic Boundary Size (Normal): 0 00:13:09.807 Atomic Boundary Size (PFail): 0 00:13:09.807 Atomic Boundary Offset: 0 00:13:09.807 Maximum Single Source Range Length: 65535
00:13:09.807 Maximum Copy Length: 65535 00:13:09.807 Maximum Source Range Count: 1 00:13:09.807 NGUID/EUI64 Never Reused: No 00:13:09.807 Namespace Write Protected: No 00:13:09.807 Number of LBA Formats: 1 00:13:09.807 Current LBA Format: LBA Format #00 00:13:09.807 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:09.807 00:13:09.808 18:26:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:10.100 [2024-05-13 18:26:25.951057] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:15.362 Initializing NVMe Controllers 00:13:15.362 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:15.362 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:15.362 Initialization complete. Launching workers. 00:13:15.362 ======================================================== 00:13:15.362 Latency(us) 00:13:15.362 Device Information : IOPS MiB/s Average min max 00:13:15.362 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35186.81 137.45 3637.06 1144.62 9584.85 00:13:15.362 ======================================================== 00:13:15.362 Total : 35186.81 137.45 3637.06 1144.62 9584.85 00:13:15.362 00:13:15.362 [2024-05-13 18:26:31.044046] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:15.362 18:26:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:15.620 [2024-05-13 18:26:31.370768] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:20.910 Initializing NVMe Controllers 00:13:20.910 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:20.910 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:20.910 Initialization complete. Launching workers. 
00:13:20.910 ======================================================== 00:13:20.910 Latency(us) 00:13:20.911 Device Information : IOPS MiB/s Average min max 00:13:20.911 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35578.96 138.98 3597.07 1187.62 9697.80 00:13:20.911 ======================================================== 00:13:20.911 Total : 35578.96 138.98 3597.07 1187.62 9697.80 00:13:20.911 00:13:20.911 [2024-05-13 18:26:36.379250] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:20.911 18:26:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /home/vagrant/spdk_repo/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:20.911 [2024-05-13 18:26:36.640655] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:26.176 [2024-05-13 18:26:41.759949] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:26.176 Initializing NVMe Controllers 00:13:26.176 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:26.176 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:26.176 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:26.176 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:26.176 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:26.176 Initialization complete. Launching workers. 00:13:26.176 Starting thread on core 2 00:13:26.176 Starting thread on core 3 00:13:26.176 Starting thread on core 1 00:13:26.176 18:26:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:26.176 [2024-05-13 18:26:42.101425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:30.358 [2024-05-13 18:26:45.897827] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:30.358 Initializing NVMe Controllers 00:13:30.358 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:30.358 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:30.358 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:30.358 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:30.358 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:30.358 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:30.358 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:13:30.358 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:30.358 Initialization complete. Launching workers. 
00:13:30.358 Starting thread on core 1 with urgent priority queue 00:13:30.358 Starting thread on core 2 with urgent priority queue 00:13:30.358 Starting thread on core 3 with urgent priority queue 00:13:30.358 Starting thread on core 0 with urgent priority queue 00:13:30.358 SPDK bdev Controller (SPDK2 ) core 0: 6312.67 IO/s 15.84 secs/100000 ios 00:13:30.358 SPDK bdev Controller (SPDK2 ) core 1: 6864.00 IO/s 14.57 secs/100000 ios 00:13:30.358 SPDK bdev Controller (SPDK2 ) core 2: 6457.67 IO/s 15.49 secs/100000 ios 00:13:30.358 SPDK bdev Controller (SPDK2 ) core 3: 7702.00 IO/s 12.98 secs/100000 ios 00:13:30.358 ======================================================== 00:13:30.358 00:13:30.358 18:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:30.358 [2024-05-13 18:26:46.244682] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:30.358 Initializing NVMe Controllers 00:13:30.358 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:30.358 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:30.358 Namespace ID: 1 size: 0GB 00:13:30.358 Initialization complete. 00:13:30.358 INFO: using host memory buffer for IO 00:13:30.358 Hello world! 00:13:30.358 [2024-05-13 18:26:46.254735] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:30.616 18:26:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:30.874 [2024-05-13 18:26:46.608794] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:31.830 Initializing NVMe Controllers 00:13:31.830 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:31.830 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:31.830 Initialization complete. Launching workers. 
00:13:31.830 submit (in ns) avg, min, max = 8431.5, 3612.3, 4029264.1 00:13:31.830 complete (in ns) avg, min, max = 22591.0, 2150.5, 4032457.3 00:13:31.830 00:13:31.830 Submit histogram 00:13:31.830 ================ 00:13:31.830 Range in us Cumulative Count 00:13:31.830 3.607 - 3.622: 0.0070% ( 1) 00:13:31.830 3.622 - 3.636: 0.0563% ( 7) 00:13:31.830 3.636 - 3.651: 0.1548% ( 14) 00:13:31.830 3.651 - 3.665: 0.3306% ( 25) 00:13:31.830 3.665 - 3.680: 0.4502% ( 17) 00:13:31.830 3.680 - 3.695: 0.6050% ( 22) 00:13:31.830 3.695 - 3.709: 0.8301% ( 32) 00:13:31.830 3.709 - 3.724: 1.0693% ( 34) 00:13:31.830 3.724 - 3.753: 3.4611% ( 340) 00:13:31.830 3.753 - 3.782: 13.3099% ( 1400) 00:13:31.830 3.782 - 3.811: 26.5846% ( 1887) 00:13:31.830 3.811 - 3.840: 44.0872% ( 2488) 00:13:31.830 3.840 - 3.869: 60.5065% ( 2334) 00:13:31.830 3.869 - 3.898: 72.4446% ( 1697) 00:13:31.830 3.898 - 3.927: 78.7900% ( 902) 00:13:31.830 3.927 - 3.956: 82.4833% ( 525) 00:13:31.830 3.956 - 3.985: 84.9666% ( 353) 00:13:31.830 3.985 - 4.015: 86.8449% ( 267) 00:13:31.830 4.015 - 4.044: 88.1815% ( 190) 00:13:31.830 4.044 - 4.073: 89.8628% ( 239) 00:13:31.830 4.073 - 4.102: 91.9662% ( 299) 00:13:31.830 4.102 - 4.131: 94.0485% ( 296) 00:13:31.830 4.131 - 4.160: 95.5047% ( 207) 00:13:31.830 4.160 - 4.189: 96.3067% ( 114) 00:13:31.830 4.189 - 4.218: 96.7992% ( 70) 00:13:31.830 4.218 - 4.247: 97.1509% ( 50) 00:13:31.831 4.247 - 4.276: 97.3338% ( 26) 00:13:31.831 4.276 - 4.305: 97.5519% ( 31) 00:13:31.831 4.305 - 4.335: 97.7770% ( 32) 00:13:31.831 4.335 - 4.364: 97.9669% ( 27) 00:13:31.831 4.364 - 4.393: 98.0654% ( 14) 00:13:31.831 4.393 - 4.422: 98.1639% ( 14) 00:13:31.831 4.422 - 4.451: 98.2272% ( 9) 00:13:31.831 4.451 - 4.480: 98.2765% ( 7) 00:13:31.831 4.480 - 4.509: 98.3539% ( 11) 00:13:31.831 4.509 - 4.538: 98.3890% ( 5) 00:13:31.831 4.538 - 4.567: 98.4664% ( 11) 00:13:31.831 4.567 - 4.596: 98.5649% ( 14) 00:13:31.831 4.596 - 4.625: 98.6352% ( 10) 00:13:31.831 4.625 - 4.655: 98.7056% ( 10) 00:13:31.831 4.655 - 4.684: 98.7408% ( 5) 00:13:31.831 4.684 - 4.713: 98.7830% ( 6) 00:13:31.831 4.713 - 4.742: 98.8181% ( 5) 00:13:31.831 4.742 - 4.771: 98.8885% ( 10) 00:13:31.831 4.771 - 4.800: 98.9588% ( 10) 00:13:31.831 4.800 - 4.829: 98.9800% ( 3) 00:13:31.831 4.829 - 4.858: 99.0151% ( 5) 00:13:31.831 4.858 - 4.887: 99.0222% ( 1) 00:13:31.831 4.887 - 4.916: 99.0714% ( 7) 00:13:31.831 4.945 - 4.975: 99.1066% ( 5) 00:13:31.831 4.975 - 5.004: 99.1277% ( 3) 00:13:31.831 5.004 - 5.033: 99.1347% ( 1) 00:13:31.831 5.062 - 5.091: 99.1418% ( 1) 00:13:31.831 5.091 - 5.120: 99.1558% ( 2) 00:13:31.831 5.149 - 5.178: 99.1629% ( 1) 00:13:31.831 5.178 - 5.207: 99.1910% ( 4) 00:13:31.831 5.207 - 5.236: 99.1980% ( 1) 00:13:31.831 5.265 - 5.295: 99.2051% ( 1) 00:13:31.831 5.295 - 5.324: 99.2262% ( 3) 00:13:31.831 5.382 - 5.411: 99.2332% ( 1) 00:13:31.831 5.411 - 5.440: 99.2473% ( 2) 00:13:31.831 5.440 - 5.469: 99.2543% ( 1) 00:13:31.831 5.469 - 5.498: 99.2684% ( 2) 00:13:31.831 5.498 - 5.527: 99.2824% ( 2) 00:13:31.831 5.527 - 5.556: 99.2895% ( 1) 00:13:31.831 5.556 - 5.585: 99.3106% ( 3) 00:13:31.831 5.585 - 5.615: 99.3247% ( 2) 00:13:31.831 5.615 - 5.644: 99.3317% ( 1) 00:13:31.831 5.702 - 5.731: 99.3387% ( 1) 00:13:31.831 5.789 - 5.818: 99.3458% ( 1) 00:13:31.831 5.905 - 5.935: 99.3528% ( 1) 00:13:31.831 5.964 - 5.993: 99.3598% ( 1) 00:13:31.831 5.993 - 6.022: 99.3739% ( 2) 00:13:31.831 6.022 - 6.051: 99.3809% ( 1) 00:13:31.831 6.167 - 6.196: 99.3880% ( 1) 00:13:31.831 6.313 - 6.342: 99.3950% ( 1) 00:13:31.831 6.691 - 6.720: 99.4020% ( 1) 00:13:31.831 
7.040 - 7.069: 99.4231% ( 3) 00:13:31.831 7.505 - 7.564: 99.4302% ( 1) 00:13:31.831 8.669 - 8.727: 99.4372% ( 1) 00:13:31.831 9.076 - 9.135: 99.4513% ( 2) 00:13:31.831 9.135 - 9.193: 99.4654% ( 2) 00:13:31.831 9.193 - 9.251: 99.4724% ( 1) 00:13:31.831 9.251 - 9.309: 99.4794% ( 1) 00:13:31.831 9.309 - 9.367: 99.4865% ( 1) 00:13:31.831 9.367 - 9.425: 99.5005% ( 2) 00:13:31.831 9.425 - 9.484: 99.5357% ( 5) 00:13:31.831 9.484 - 9.542: 99.5498% ( 2) 00:13:31.831 9.542 - 9.600: 99.5568% ( 1) 00:13:31.831 9.600 - 9.658: 99.5638% ( 1) 00:13:31.831 9.658 - 9.716: 99.5849% ( 3) 00:13:31.831 9.716 - 9.775: 99.6131% ( 4) 00:13:31.831 9.891 - 9.949: 99.6201% ( 1) 00:13:31.831 9.949 - 10.007: 99.6412% ( 3) 00:13:31.831 10.007 - 10.065: 99.6623% ( 3) 00:13:31.831 10.065 - 10.124: 99.6694% ( 1) 00:13:31.831 10.124 - 10.182: 99.6834% ( 2) 00:13:31.831 10.240 - 10.298: 99.7116% ( 4) 00:13:31.831 10.298 - 10.356: 99.7256% ( 2) 00:13:31.831 10.415 - 10.473: 99.7327% ( 1) 00:13:31.831 10.473 - 10.531: 99.7538% ( 3) 00:13:31.831 10.589 - 10.647: 99.7679% ( 2) 00:13:31.831 10.647 - 10.705: 99.7749% ( 1) 00:13:31.831 10.764 - 10.822: 99.7819% ( 1) 00:13:31.831 10.822 - 10.880: 99.7890% ( 1) 00:13:31.831 10.938 - 10.996: 99.7960% ( 1) 00:13:31.831 11.113 - 11.171: 99.8030% ( 1) 00:13:31.831 11.171 - 11.229: 99.8101% ( 1) 00:13:31.831 11.520 - 11.578: 99.8171% ( 1) 00:13:31.831 11.753 - 11.811: 99.8241% ( 1) 00:13:31.831 11.869 - 11.927: 99.8312% ( 1) 00:13:31.831 12.276 - 12.335: 99.8382% ( 1) 00:13:31.831 13.440 - 13.498: 99.8452% ( 1) 00:13:31.831 14.022 - 14.080: 99.8523% ( 1) 00:13:31.831 14.138 - 14.196: 99.8593% ( 1) 00:13:31.831 14.895 - 15.011: 99.8663% ( 1) 00:13:31.831 15.709 - 15.825: 99.8734% ( 1) 00:13:31.831 18.502 - 18.618: 99.8804% ( 1) 00:13:31.831 18.967 - 19.084: 99.8874% ( 1) 00:13:31.831 3932.160 - 3961.949: 99.8945% ( 1) 00:13:31.831 3991.738 - 4021.527: 99.9930% ( 14) 00:13:31.831 4021.527 - 4051.316: 100.0000% ( 1) 00:13:31.831 00:13:31.831 Complete histogram 00:13:31.831 ================== 00:13:31.831 Range in us Cumulative Count 00:13:31.831 2.138 - 2.153: 0.0070% ( 1) 00:13:31.831 2.153 - 2.167: 1.0623% ( 150) 00:13:31.831 2.167 - 2.182: 1.5477% ( 69) 00:13:31.831 2.182 - 2.196: 1.5688% ( 3) 00:13:31.831 2.196 - 2.211: 1.5899% ( 3) 00:13:31.831 2.211 - 2.225: 1.7376% ( 21) 00:13:31.831 2.225 - 2.240: 31.1994% ( 4188) 00:13:31.831 2.240 - 2.255: 87.0911% ( 7945) 00:13:31.831 2.255 - 2.269: 90.5171% ( 487) 00:13:31.831 2.269 - 2.284: 91.5090% ( 141) 00:13:31.831 2.284 - 2.298: 93.8234% ( 329) 00:13:31.831 2.298 - 2.313: 96.7429% ( 415) 00:13:31.831 2.313 - 2.327: 97.4112% ( 95) 00:13:31.831 2.327 - 2.342: 97.9036% ( 70) 00:13:31.831 2.342 - 2.356: 98.1780% ( 39) 00:13:31.831 2.356 - 2.371: 98.4101% ( 33) 00:13:31.831 2.371 - 2.385: 98.5790% ( 24) 00:13:31.831 2.385 - 2.400: 98.6493% ( 10) 00:13:31.831 2.400 - 2.415: 98.7126% ( 9) 00:13:31.831 2.415 - 2.429: 98.7408% ( 4) 00:13:31.831 2.429 - 2.444: 98.7830% ( 6) 00:13:31.831 2.444 - 2.458: 98.7900% ( 1) 00:13:31.831 2.458 - 2.473: 98.7970% ( 1) 00:13:31.831 2.487 - 2.502: 98.8252% ( 4) 00:13:31.831 2.502 - 2.516: 98.8604% ( 5) 00:13:31.831 2.516 - 2.531: 98.8815% ( 3) 00:13:31.831 2.531 - 2.545: 98.9026% ( 3) 00:13:31.831 2.545 - 2.560: 98.9237% ( 3) 00:13:31.831 2.560 - 2.575: 98.9377% ( 2) 00:13:31.831 2.575 - 2.589: 98.9588% ( 3) 00:13:31.831 2.589 - 2.604: 98.9800% ( 3) 00:13:31.831 2.604 - 2.618: 98.9940% ( 2) 00:13:31.831 2.662 - 2.676: 99.0011% ( 1) 00:13:31.831 2.720 - 2.735: 99.0081% ( 1) 00:13:31.831 2.735 - 2.749: 99.0222% ( 2) 
00:13:31.831 2.822 - 2.836: 99.0292% ( 1) 00:13:31.831 2.865 - 2.880: 99.0362% ( 1) 00:13:31.831 2.880 - 2.895: 99.0433% ( 1) 00:13:31.831 2.938 - 2.953: 99.0503% ( 1) 00:13:31.831 2.982 - 2.996: 99.0573% ( 1) 00:13:31.831 3.505 - 3.520: 99.0714% ( 2) 00:13:31.831 3.549 - 3.564: 99.0784% ( 1) 00:13:31.831 3.593 - 3.6[2024-05-13 18:26:47.709681] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:31.831 07: 99.0855% ( 1) 00:13:31.831 3.665 - 3.680: 99.0925% ( 1) 00:13:31.831 3.724 - 3.753: 99.1066% ( 2) 00:13:31.831 3.753 - 3.782: 99.1136% ( 1) 00:13:31.831 3.840 - 3.869: 99.1206% ( 1) 00:13:31.831 3.869 - 3.898: 99.1277% ( 1) 00:13:31.831 3.898 - 3.927: 99.1347% ( 1) 00:13:31.831 3.927 - 3.956: 99.1418% ( 1) 00:13:31.831 4.015 - 4.044: 99.1488% ( 1) 00:13:31.831 4.044 - 4.073: 99.1558% ( 1) 00:13:31.831 4.160 - 4.189: 99.1629% ( 1) 00:13:31.831 4.189 - 4.218: 99.1699% ( 1) 00:13:31.831 4.276 - 4.305: 99.1769% ( 1) 00:13:31.831 4.305 - 4.335: 99.1980% ( 3) 00:13:31.831 4.422 - 4.451: 99.2051% ( 1) 00:13:31.831 4.451 - 4.480: 99.2121% ( 1) 00:13:31.831 4.596 - 4.625: 99.2191% ( 1) 00:13:31.831 4.684 - 4.713: 99.2262% ( 1) 00:13:31.831 4.771 - 4.800: 99.2402% ( 2) 00:13:31.831 4.945 - 4.975: 99.2473% ( 1) 00:13:31.831 5.324 - 5.353: 99.2543% ( 1) 00:13:31.831 5.469 - 5.498: 99.2613% ( 1) 00:13:31.831 5.818 - 5.847: 99.2684% ( 1) 00:13:31.831 6.138 - 6.167: 99.2754% ( 1) 00:13:31.831 6.604 - 6.633: 99.2824% ( 1) 00:13:31.831 7.011 - 7.040: 99.2895% ( 1) 00:13:31.831 7.069 - 7.098: 99.2965% ( 1) 00:13:31.831 7.185 - 7.215: 99.3036% ( 1) 00:13:31.831 7.738 - 7.796: 99.3106% ( 1) 00:13:31.831 7.913 - 7.971: 99.3176% ( 1) 00:13:31.831 8.029 - 8.087: 99.3247% ( 1) 00:13:31.831 8.378 - 8.436: 99.3317% ( 1) 00:13:31.831 8.495 - 8.553: 99.3458% ( 2) 00:13:31.831 8.553 - 8.611: 99.3528% ( 1) 00:13:31.831 8.611 - 8.669: 99.3598% ( 1) 00:13:31.831 8.844 - 8.902: 99.3669% ( 1) 00:13:31.831 8.902 - 8.960: 99.3809% ( 2) 00:13:31.831 9.193 - 9.251: 99.3880% ( 1) 00:13:31.831 9.309 - 9.367: 99.3950% ( 1) 00:13:31.831 9.425 - 9.484: 99.4091% ( 2) 00:13:31.831 9.658 - 9.716: 99.4161% ( 1) 00:13:31.831 9.716 - 9.775: 99.4302% ( 2) 00:13:31.831 10.065 - 10.124: 99.4372% ( 1) 00:13:31.831 10.298 - 10.356: 99.4442% ( 1) 00:13:31.831 11.404 - 11.462: 99.4513% ( 1) 00:13:31.831 12.044 - 12.102: 99.4583% ( 1) 00:13:31.831 12.218 - 12.276: 99.4654% ( 1) 00:13:31.831 12.684 - 12.742: 99.4724% ( 1) 00:13:31.831 15.709 - 15.825: 99.4794% ( 1) 00:13:31.831 18.153 - 18.269: 99.4935% ( 2) 00:13:31.831 3991.738 - 4021.527: 99.9719% ( 68) 00:13:31.831 4021.527 - 4051.316: 100.0000% ( 4) 00:13:31.831 00:13:31.832 18:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:31.832 18:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:31.832 18:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:31.832 18:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:31.832 18:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:32.089 [ 00:13:32.089 { 00:13:32.089 "allow_any_host": true, 00:13:32.089 "hosts": [], 00:13:32.089 "listen_addresses": [], 00:13:32.089 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:32.089 "subtype": 
"Discovery" 00:13:32.089 }, 00:13:32.089 { 00:13:32.089 "allow_any_host": true, 00:13:32.089 "hosts": [], 00:13:32.089 "listen_addresses": [ 00:13:32.089 { 00:13:32.089 "adrfam": "IPv4", 00:13:32.089 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:32.089 "trsvcid": "0", 00:13:32.089 "trtype": "VFIOUSER" 00:13:32.089 } 00:13:32.089 ], 00:13:32.089 "max_cntlid": 65519, 00:13:32.089 "max_namespaces": 32, 00:13:32.089 "min_cntlid": 1, 00:13:32.089 "model_number": "SPDK bdev Controller", 00:13:32.089 "namespaces": [ 00:13:32.089 { 00:13:32.089 "bdev_name": "Malloc1", 00:13:32.089 "name": "Malloc1", 00:13:32.089 "nguid": "16AF210AB7344AE8AEE2636949E5FA61", 00:13:32.089 "nsid": 1, 00:13:32.089 "uuid": "16af210a-b734-4ae8-aee2-636949e5fa61" 00:13:32.089 }, 00:13:32.089 { 00:13:32.089 "bdev_name": "Malloc3", 00:13:32.089 "name": "Malloc3", 00:13:32.089 "nguid": "A69B88F880164325BAE5CA272ACF4733", 00:13:32.089 "nsid": 2, 00:13:32.089 "uuid": "a69b88f8-8016-4325-bae5-ca272acf4733" 00:13:32.089 } 00:13:32.089 ], 00:13:32.089 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:32.089 "serial_number": "SPDK1", 00:13:32.089 "subtype": "NVMe" 00:13:32.089 }, 00:13:32.089 { 00:13:32.089 "allow_any_host": true, 00:13:32.089 "hosts": [], 00:13:32.089 "listen_addresses": [ 00:13:32.089 { 00:13:32.089 "adrfam": "IPv4", 00:13:32.089 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:32.089 "trsvcid": "0", 00:13:32.089 "trtype": "VFIOUSER" 00:13:32.089 } 00:13:32.089 ], 00:13:32.089 "max_cntlid": 65519, 00:13:32.089 "max_namespaces": 32, 00:13:32.089 "min_cntlid": 1, 00:13:32.089 "model_number": "SPDK bdev Controller", 00:13:32.089 "namespaces": [ 00:13:32.089 { 00:13:32.089 "bdev_name": "Malloc2", 00:13:32.089 "name": "Malloc2", 00:13:32.089 "nguid": "51167CF181EC4C69872E9D031AD4B011", 00:13:32.089 "nsid": 1, 00:13:32.089 "uuid": "51167cf1-81ec-4c69-872e-9d031ad4b011" 00:13:32.089 } 00:13:32.089 ], 00:13:32.089 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:32.089 "serial_number": "SPDK2", 00:13:32.089 "subtype": "NVMe" 00:13:32.089 } 00:13:32.089 ] 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=76584 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=1 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=2 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # '[' 2 -lt 200 ']' 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # i=3 00:13:32.348 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # sleep 0.1 00:13:32.348 [2024-05-13 18:26:48.266959] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:32.606 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:32.606 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:32.606 18:26:48 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:13:32.606 18:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:32.606 18:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:32.864 Malloc4 00:13:32.864 18:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:33.123 [2024-05-13 18:26:48.973643] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:33.123 18:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:33.123 Asynchronous Event Request test 00:13:33.123 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:33.123 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:33.123 Registering asynchronous event callbacks... 00:13:33.123 Starting namespace attribute notice tests for all controllers... 00:13:33.123 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:33.123 aer_cb - Changed Namespace 00:13:33.123 Cleaning up... 
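For reference, a condensed sketch of the AER hot-add flow exercised above. Paths, NQNs, and RPC names are taken from the log; the waitforfile polling loop is simplified and the background/wait handling is illustrative rather than a copy of nvmf_vfio_user.sh.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
traddr=/var/run/vfio-user/domain/vfio-user2/2
touch_file=/tmp/aer_touch_file

# Start the AER listener; it touches $touch_file once its callbacks are armed.
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
    -r "trtype:VFIOUSER traddr:$traddr subnqn:nqn.2019-07.io.spdk:cnode2" \
    -n 2 -g -t "$touch_file" &
aerpid=$!

# Poll for the touch file (up to ~20 s), mirroring the waitforfile helper.
for ((i = 0; i < 200; i++)); do
    [[ -e $touch_file ]] && break
    sleep 0.1
done
rm -f "$touch_file"

# Hot-add a second namespace; this is what fires the "Changed Namespace" AER seen above.
$rpc bdev_malloc_create 64 512 --name Malloc4
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2

# Let the listener observe the event and exit.
wait "$aerpid"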
00:13:33.381 [ 00:13:33.381 { 00:13:33.381 "allow_any_host": true, 00:13:33.381 "hosts": [], 00:13:33.381 "listen_addresses": [], 00:13:33.381 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:33.381 "subtype": "Discovery" 00:13:33.381 }, 00:13:33.381 { 00:13:33.381 "allow_any_host": true, 00:13:33.381 "hosts": [], 00:13:33.381 "listen_addresses": [ 00:13:33.381 { 00:13:33.381 "adrfam": "IPv4", 00:13:33.381 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:33.381 "trsvcid": "0", 00:13:33.381 "trtype": "VFIOUSER" 00:13:33.381 } 00:13:33.381 ], 00:13:33.381 "max_cntlid": 65519, 00:13:33.381 "max_namespaces": 32, 00:13:33.381 "min_cntlid": 1, 00:13:33.381 "model_number": "SPDK bdev Controller", 00:13:33.381 "namespaces": [ 00:13:33.381 { 00:13:33.381 "bdev_name": "Malloc1", 00:13:33.381 "name": "Malloc1", 00:13:33.381 "nguid": "16AF210AB7344AE8AEE2636949E5FA61", 00:13:33.381 "nsid": 1, 00:13:33.381 "uuid": "16af210a-b734-4ae8-aee2-636949e5fa61" 00:13:33.381 }, 00:13:33.381 { 00:13:33.381 "bdev_name": "Malloc3", 00:13:33.381 "name": "Malloc3", 00:13:33.381 "nguid": "A69B88F880164325BAE5CA272ACF4733", 00:13:33.381 "nsid": 2, 00:13:33.381 "uuid": "a69b88f8-8016-4325-bae5-ca272acf4733" 00:13:33.381 } 00:13:33.381 ], 00:13:33.381 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:33.381 "serial_number": "SPDK1", 00:13:33.381 "subtype": "NVMe" 00:13:33.381 }, 00:13:33.381 { 00:13:33.381 "allow_any_host": true, 00:13:33.381 "hosts": [], 00:13:33.381 "listen_addresses": [ 00:13:33.381 { 00:13:33.381 "adrfam": "IPv4", 00:13:33.381 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:33.381 "trsvcid": "0", 00:13:33.381 "trtype": "VFIOUSER" 00:13:33.381 } 00:13:33.381 ], 00:13:33.381 "max_cntlid": 65519, 00:13:33.381 "max_namespaces": 32, 00:13:33.381 "min_cntlid": 1, 00:13:33.381 "model_number": "SPDK bdev Controller", 00:13:33.381 "namespaces": [ 00:13:33.381 { 00:13:33.381 "bdev_name": "Malloc2", 00:13:33.381 "name": "Malloc2", 00:13:33.381 "nguid": "51167CF181EC4C69872E9D031AD4B011", 00:13:33.381 "nsid": 1, 00:13:33.381 "uuid": "51167cf1-81ec-4c69-872e-9d031ad4b011" 00:13:33.381 }, 00:13:33.381 { 00:13:33.381 "bdev_name": "Malloc4", 00:13:33.381 "name": "Malloc4", 00:13:33.381 "nguid": "EBACAC6E6CB44FBABB264EB97B6231E6", 00:13:33.381 "nsid": 2, 00:13:33.381 "uuid": "ebacac6e-6cb4-4fba-bb26-4eb97b6231e6" 00:13:33.381 } 00:13:33.381 ], 00:13:33.381 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:33.381 "serial_number": "SPDK2", 00:13:33.381 "subtype": "NVMe" 00:13:33.381 } 00:13:33.381 ] 00:13:33.381 18:26:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 76584 00:13:33.381 18:26:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:33.381 18:26:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 75897 00:13:33.381 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 75897 ']' 00:13:33.381 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 75897 00:13:33.381 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:13:33.382 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:33.382 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75897 00:13:33.382 killing process with pid 75897 00:13:33.382 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:33.382 18:26:49 nvmf_tcp.nvmf_vfio_user -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:33.382 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75897' 00:13:33.382 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 75897 00:13:33.382 [2024-05-13 18:26:49.304724] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:33.382 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 75897 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=76638 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:33.948 Process pid: 76638 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 76638' 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 76638 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 76638 ']' 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:33.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:33.948 18:26:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:33.948 [2024-05-13 18:26:49.712961] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:33.948 [2024-05-13 18:26:49.714196] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:13:33.948 [2024-05-13 18:26:49.714281] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.949 [2024-05-13 18:26:49.856351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.207 [2024-05-13 18:26:49.962979] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.207 [2024-05-13 18:26:49.963037] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
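A condensed, hypothetical reading of the killprocess helper exercised above for pid 75897; the uname and sudo special cases visible in the log are elided here.

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1              # only proceed if the target is still running
    ps --no-headers -o comm= "$pid"         # reports reactor_0 for an SPDK target
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                     # reap the child that was started with &
}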
00:13:34.207 [2024-05-13 18:26:49.963049] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.207 [2024-05-13 18:26:49.963057] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.207 [2024-05-13 18:26:49.963065] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:34.207 [2024-05-13 18:26:49.963184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.207 [2024-05-13 18:26:49.963851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.207 [2024-05-13 18:26:49.963955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.207 [2024-05-13 18:26:49.963961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.207 [2024-05-13 18:26:50.061683] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:34.207 [2024-05-13 18:26:50.061811] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:34.207 [2024-05-13 18:26:50.061877] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:34.207 [2024-05-13 18:26:50.061961] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:34.207 [2024-05-13 18:26:50.062853] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:34.773 18:26:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:34.773 18:26:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:13:34.773 18:26:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:36.148 18:26:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:36.148 18:26:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:36.148 18:26:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:36.148 18:26:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:36.148 18:26:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:36.148 18:26:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:36.406 Malloc1 00:13:36.406 18:26:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:36.673 18:26:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:37.239 18:26:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:37.497 [2024-05-13 18:26:53.200641] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:13:37.497 18:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:37.497 18:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:37.497 18:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:37.755 Malloc2 00:13:37.755 18:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:38.014 18:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:38.271 18:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:38.271 18:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:38.271 18:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 76638 00:13:38.271 18:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 76638 ']' 00:13:38.271 18:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 76638 00:13:38.271 18:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:13:38.271 18:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:38.271 18:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76638 00:13:38.529 18:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:38.529 18:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:38.529 killing process with pid 76638 00:13:38.529 18:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76638' 00:13:38.529 18:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 76638 00:13:38.529 [2024-05-13 18:26:54.219869] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:38.529 18:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 76638 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:38.788 00:13:38.788 real 0m56.524s 00:13:38.788 user 3m41.985s 00:13:38.788 sys 0m4.252s 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:38.788 ************************************ 00:13:38.788 END TEST nvmf_vfio_user 00:13:38.788 ************************************ 00:13:38.788 18:26:54 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:38.788 18:26:54 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:38.788 18:26:54 nvmf_tcp -- common/autotest_common.sh@1103 -- # 
xtrace_disable 00:13:38.788 18:26:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:38.788 ************************************ 00:13:38.788 START TEST nvmf_vfio_user_nvme_compliance 00:13:38.788 ************************************ 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:38.788 * Looking for test storage... 00:13:38.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/compliance 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:38.788 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=76830 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:38.789 Process pid: 76830 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 76830' 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 76830 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 76830 ']' 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:38.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:38.789 18:26:54 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:39.047 [2024-05-13 18:26:54.753877] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:13:39.047 [2024-05-13 18:26:54.753988] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.047 [2024-05-13 18:26:54.893034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:39.306 [2024-05-13 18:26:55.012185] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.306 [2024-05-13 18:26:55.012411] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:39.306 [2024-05-13 18:26:55.012632] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.306 [2024-05-13 18:26:55.012790] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.306 [2024-05-13 18:26:55.012802] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
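The compliance run above follows the usual start-then-wait pattern: launch nvmf_tgt in the background, arm a cleanup trap, and block until the RPC socket answers before issuing rpc_cmd calls. A minimal sketch, with waitforlisten reduced to a simple poll and plain kill standing in for the killprocess helper:

NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$NVMF_TGT" -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!
trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT

# Block until the app is up and listening on /var/tmp/spdk.sock.
until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done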
00:13:39.306 [2024-05-13 18:26:55.012936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.306 [2024-05-13 18:26:55.013324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.306 [2024-05-13 18:26:55.013337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.874 18:26:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:39.874 18:26:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:13:39.874 18:26:55 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:40.899 malloc0 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.899 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:40.900 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.900 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:41.157 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.157 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:41.157 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.157 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:41.157 [2024-05-13 18:26:56.847103] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:41.157 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.157 18:26:56 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:41.157 00:13:41.157 00:13:41.157 CUnit - A unit testing framework for C - Version 2.1-3 00:13:41.157 http://cunit.sourceforge.net/ 00:13:41.157 00:13:41.157 00:13:41.157 Suite: nvme_compliance 00:13:41.157 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-13 18:26:57.066236] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.157 [2024-05-13 18:26:57.067746] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:41.157 [2024-05-13 18:26:57.067793] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:41.157 [2024-05-13 18:26:57.067807] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:41.157 [2024-05-13 18:26:57.069245] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.415 passed 00:13:41.415 Test: admin_identify_ctrlr_verify_fused ...[2024-05-13 18:26:57.160911] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.415 [2024-05-13 18:26:57.163926] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.415 passed 00:13:41.415 Test: admin_identify_ns ...[2024-05-13 18:26:57.257476] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.415 [2024-05-13 18:26:57.313606] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:41.415 [2024-05-13 18:26:57.321601] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:41.415 [2024-05-13 18:26:57.342756] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.674 passed 00:13:41.674 Test: admin_get_features_mandatory_features ...[2024-05-13 18:26:57.436015] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.674 [2024-05-13 18:26:57.439041] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.674 passed 00:13:41.674 Test: admin_get_features_optional_features ...[2024-05-13 18:26:57.529554] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.674 [2024-05-13 18:26:57.532594] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.674 passed 00:13:41.932 Test: admin_set_features_number_of_queues ...[2024-05-13 18:26:57.624711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.932 [2024-05-13 18:26:57.731816] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.932 passed 00:13:41.932 Test: admin_get_log_page_mandatory_logs ...[2024-05-13 18:26:57.819538] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.932 [2024-05-13 18:26:57.822552] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.932 passed 00:13:42.191 Test: admin_get_log_page_with_lpo ...[2024-05-13 18:26:57.912131] 
vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.191 [2024-05-13 18:26:57.978595] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:42.191 [2024-05-13 18:26:57.990710] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:42.191 passed 00:13:42.191 Test: fabric_property_get ...[2024-05-13 18:26:58.079367] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.191 [2024-05-13 18:26:58.080698] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:42.191 [2024-05-13 18:26:58.084391] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:42.191 passed 00:13:42.449 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-13 18:26:58.174862] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.449 [2024-05-13 18:26:58.176192] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:42.449 [2024-05-13 18:26:58.177915] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:42.449 passed 00:13:42.449 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-13 18:26:58.268053] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.449 [2024-05-13 18:26:58.356593] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:42.449 [2024-05-13 18:26:58.372606] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:42.449 [2024-05-13 18:26:58.377866] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:42.707 passed 00:13:42.707 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-13 18:26:58.466548] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.707 [2024-05-13 18:26:58.467909] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:42.707 [2024-05-13 18:26:58.469569] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:42.707 passed 00:13:42.707 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-13 18:26:58.561318] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.707 [2024-05-13 18:26:58.634592] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:42.966 [2024-05-13 18:26:58.658590] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:42.966 [2024-05-13 18:26:58.663843] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:42.966 passed 00:13:42.966 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-13 18:26:58.752331] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.966 [2024-05-13 18:26:58.754016] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:42.966 [2024-05-13 18:26:58.754076] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:42.966 [2024-05-13 18:26:58.755428] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:42.966 passed 00:13:42.966 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-13 18:26:58.844167] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: 
enabling controller 00:13:43.227 [2024-05-13 18:26:58.946645] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:43.227 [2024-05-13 18:26:58.957670] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:43.227 [2024-05-13 18:26:58.965648] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:43.227 [2024-05-13 18:26:58.973667] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:43.227 [2024-05-13 18:26:59.005891] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:43.227 passed 00:13:43.227 Test: admin_create_io_sq_verify_pc ...[2024-05-13 18:26:59.096785] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:43.227 [2024-05-13 18:26:59.111653] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:43.227 [2024-05-13 18:26:59.128806] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:43.227 passed 00:13:43.507 Test: admin_create_io_qp_max_qps ...[2024-05-13 18:26:59.222452] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:44.441 [2024-05-13 18:27:00.362682] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:45.008 [2024-05-13 18:27:00.767422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.008 passed 00:13:45.008 Test: admin_create_io_sq_shared_cq ...[2024-05-13 18:27:00.856305] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.268 [2024-05-13 18:27:00.990584] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:45.268 [2024-05-13 18:27:01.026725] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.268 passed 00:13:45.268 00:13:45.268 Run Summary: Type Total Ran Passed Failed Inactive 00:13:45.268 suites 1 1 n/a 0 0 00:13:45.268 tests 18 18 18 0 0 00:13:45.268 asserts 360 360 360 0 n/a 00:13:45.268 00:13:45.268 Elapsed time = 1.661 seconds 00:13:45.268 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 76830 00:13:45.268 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 76830 ']' 00:13:45.268 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 76830 00:13:45.268 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:13:45.268 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:45.268 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76830 00:13:45.268 killing process with pid 76830 00:13:45.268 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:45.268 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:45.268 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76830' 00:13:45.268 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@965 -- # kill 76830 00:13:45.268 [2024-05-13 18:27:01.107540] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:45.268 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 76830 00:13:45.527 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:45.527 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:45.527 00:13:45.527 real 0m6.807s 00:13:45.527 user 0m19.066s 00:13:45.527 sys 0m0.579s 00:13:45.527 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:45.527 18:27:01 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.527 ************************************ 00:13:45.527 END TEST nvmf_vfio_user_nvme_compliance 00:13:45.527 ************************************ 00:13:45.527 18:27:01 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:45.527 18:27:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:45.527 18:27:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:45.527 18:27:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:45.527 ************************************ 00:13:45.527 START TEST nvmf_vfio_user_fuzz 00:13:45.527 ************************************ 00:13:45.527 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:45.784 * Looking for test storage... 
00:13:45.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.784 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:45.785 18:27:01 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=76977 00:13:45.785 Process pid: 76977 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 76977' 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 76977 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 76977 ']' 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:45.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:45.785 18:27:01 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:46.793 18:27:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:46.793 18:27:02 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:13:46.793 18:27:02 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:47.728 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:47.728 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.728 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:47.728 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.728 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:47.728 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:47.729 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.729 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:47.991 malloc0 00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 
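
The vfio_user_fuzz setup traced so far reduces to a short RPC sequence. A minimal stand-alone sketch, using scripts/rpc.py in place of the harness's rpc_cmd wrapper (paths are relative to the SPDK repo root; waiting for /var/tmp/spdk.sock to appear before issuing RPCs is left implicit):

    # single-core target with all tracepoint groups enabled, as in the traced invocation
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

    # vfio-user transport plus a 64 MiB / 512 B-block malloc namespace behind it
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
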
00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:47.991 18:27:03 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:48.249 Shutting down the fuzz application 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 76977 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 76977 ']' 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 76977 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76977 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:48.249 killing process with pid 76977 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76977' 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 76977 00:13:48.249 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 76977 00:13:48.816 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_log.txt /home/vagrant/spdk_repo/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:48.816 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:48.816 00:13:48.816 real 0m3.018s 00:13:48.816 user 0m3.396s 00:13:48.816 sys 0m0.414s 00:13:48.816 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:48.816 18:27:04 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:48.816 ************************************ 00:13:48.816 END TEST nvmf_vfio_user_fuzz 00:13:48.816 ************************************ 
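
Adding the vfio-user listener and driving the fuzzer, followed by the teardown steps seen above, looks roughly like this (flags copied from the trace; $nvmfpid stands for the target's pid):

    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

    # fuzz the endpoint for 30 s with a fixed seed so runs are repeatable
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -N -a \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'

    # teardown: drop the subsystem, stop the target, remove the socket directory
    scripts/rpc.py nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0
    kill "$nvmfpid" && wait "$nvmfpid"
    rm -rf /var/run/vfio-user
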
00:13:48.816 18:27:04 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:48.816 18:27:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:48.816 18:27:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:48.816 18:27:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:48.816 ************************************ 00:13:48.816 START TEST nvmf_host_management 00:13:48.816 ************************************ 00:13:48.816 18:27:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:48.816 * Looking for test storage... 00:13:48.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:48.816 18:27:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:48.817 
18:27:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:48.817 Cannot find device "nvmf_tgt_br" 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@155 -- # true 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:48.817 Cannot find device "nvmf_tgt_br2" 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@156 -- # true 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:48.817 Cannot find device "nvmf_tgt_br" 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@158 -- # true 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:48.817 Cannot find device "nvmf_tgt_br2" 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@159 -- # true 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:48.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:48.817 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:48.817 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:49.076 18:27:04 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:49.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:49.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:13:49.076 00:13:49.076 --- 10.0.0.2 ping statistics --- 00:13:49.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.076 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:49.076 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:49.076 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:13:49.076 00:13:49.076 --- 10.0.0.3 ping statistics --- 00:13:49.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.076 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:49.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:49.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.056 ms 00:13:49.076 00:13:49.076 --- 10.0.0.1 ping statistics --- 00:13:49.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:49.076 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@433 -- # return 0 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:49.076 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=77210 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 77210 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 77210 ']' 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:49.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:49.077 18:27:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:49.335 [2024-05-13 18:27:05.049339] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:13:49.335 [2024-05-13 18:27:05.049454] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:49.335 [2024-05-13 18:27:05.189228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:49.593 [2024-05-13 18:27:05.319066] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:49.593 [2024-05-13 18:27:05.319126] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:49.593 [2024-05-13 18:27:05.319138] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:49.593 [2024-05-13 18:27:05.319146] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:49.593 [2024-05-13 18:27:05.319153] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
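
nvmf_veth_init above stitches together a small virtual topology: the target runs in its own network namespace and reaches the initiator side through veth pairs joined by a bridge. Ignoring the cleanup steps and the second target interface (nvmf_tgt_if2 / 10.0.0.3, which follows the same pattern), the traced commands amount to:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

    # the target itself is then launched inside the namespace, as traced above
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
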
00:13:49.593 [2024-05-13 18:27:05.319320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:49.593 [2024-05-13 18:27:05.319445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:49.593 [2024-05-13 18:27:05.319538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.593 [2024-05-13 18:27:05.319548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:50.160 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:50.160 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:13:50.160 18:27:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:50.160 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:50.160 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.160 18:27:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:50.160 18:27:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:50.160 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.160 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.160 [2024-05-13 18:27:06.101810] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.417 Malloc0 00:13:50.417 [2024-05-13 18:27:06.185085] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:50.417 [2024-05-13 18:27:06.185358] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=77282 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 77282 /var/tmp/bdevperf.sock 
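
On the TCP side the subsystem setup goes through a batched rpc_cmd call (the rpcs.txt heredoc), so the individual RPCs are not echoed in the trace. Based on the transport options, the Malloc0 bdev and the 10.0.0.2:4420 listener visible in the output, the equivalent standalone calls would look roughly like the following; the serial number and the explicit host grant are assumptions (the grant is implied by the nvmf_subsystem_remove_host/add_host steps later in the run):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
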
00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 77282 ']' 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:50.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:50.417 { 00:13:50.417 "params": { 00:13:50.417 "name": "Nvme$subsystem", 00:13:50.417 "trtype": "$TEST_TRANSPORT", 00:13:50.417 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:50.417 "adrfam": "ipv4", 00:13:50.417 "trsvcid": "$NVMF_PORT", 00:13:50.417 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:50.417 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:50.417 "hdgst": ${hdgst:-false}, 00:13:50.417 "ddgst": ${ddgst:-false} 00:13:50.417 }, 00:13:50.417 "method": "bdev_nvme_attach_controller" 00:13:50.417 } 00:13:50.417 EOF 00:13:50.417 )") 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:50.417 18:27:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:50.417 "params": { 00:13:50.417 "name": "Nvme0", 00:13:50.417 "trtype": "tcp", 00:13:50.417 "traddr": "10.0.0.2", 00:13:50.417 "adrfam": "ipv4", 00:13:50.417 "trsvcid": "4420", 00:13:50.418 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:50.418 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:50.418 "hdgst": false, 00:13:50.418 "ddgst": false 00:13:50.418 }, 00:13:50.418 "method": "bdev_nvme_attach_controller" 00:13:50.418 }' 00:13:50.418 [2024-05-13 18:27:06.294808] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
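
gen_nvmf_target_json emits the controller fragment printed above wrapped in a standard SPDK JSON config, which bdevperf consumes through --json (fed as /dev/fd/63 via process substitution in the trace). A sketch of the assembled file and the equivalent invocation follows; the subsystems/bdev wrapper is an assumption about the helper's output format rather than something echoed in the log:

    cat > /tmp/bdevperf_nvme.json <<'JSON'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    JSON

    # 64-deep, 64 KiB verify workload for 10 s; the RPC socket is moved aside so
    # it does not clash with the target's /var/tmp/spdk.sock
    build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/bdevperf_nvme.json -q 64 -o 65536 -w verify -t 10
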
00:13:50.418 [2024-05-13 18:27:06.295503] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77282 ] 00:13:50.675 [2024-05-13 18:27:06.437458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:50.675 [2024-05-13 18:27:06.563616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:50.934 Running I/O for 10 seconds... 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.502 18:27:07 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.502 18:27:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:51.502 [2024-05-13 18:27:07.413498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.502 [2024-05-13 18:27:07.413549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.502 [2024-05-13 18:27:07.413565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.503 [2024-05-13 18:27:07.413588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.413600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.503 [2024-05-13 18:27:07.413610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.413621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:13:51.503 [2024-05-13 18:27:07.413630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.413640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187b200 is same with the state(5) to be set 00:13:51.503 18:27:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.503 18:27:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:13:51.503 [2024-05-13 18:27:07.424207] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187b200 (9): Bad file descriptor 00:13:51.503 [2024-05-13 18:27:07.424296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 
18:27:07.424376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424827] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.424984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.424994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.425005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.425014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.425025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.425034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [2024-05-13 18:27:07.425045] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.503 [2024-05-13 18:27:07.425054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.503 [... 29 more identical WRITE / ABORTED - SQ DELETION (00/08) pairs for cid:34 through cid:62, lba:4352 through lba:7936 in 128-block steps, aborted while the submission queue is deleted for the controller reset ...] 00:13:51.504 [2024-05-13 18:27:07.425678] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:8064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:51.504 [2024-05-13 18:27:07.425687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:51.504 [2024-05-13 18:27:07.425765] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x187a990 was disconnected and freed. reset controller. 00:13:51.504 [2024-05-13 18:27:07.426890] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:51.504 task offset: 0 on job bdev=Nvme0n1 fails 00:13:51.504 00:13:51.504 Latency(us) 00:13:51.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:51.504 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:51.504 Job: Nvme0n1 ended in about 0.68 seconds with error 00:13:51.504 Verification LBA range: start 0x0 length 0x400 00:13:51.504 Nvme0n1 : 0.68 1508.75 94.30 94.30 0.00 38934.19 1899.05 36223.53 00:13:51.504 =================================================================================================================== 00:13:51.504 Total : 1508.75 94.30 94.30 0.00 38934.19 1899.05 36223.53 00:13:51.504 [2024-05-13 18:27:07.428827] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:51.504 [2024-05-13 18:27:07.437842] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:52.880 18:27:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 77282 00:13:52.880 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (77282) - No such process 00:13:52.880 18:27:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:13:52.880 18:27:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:52.880 18:27:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:52.880 18:27:08 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:52.880 18:27:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:13:52.880 18:27:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:13:52.880 18:27:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:52.880 18:27:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:52.880 { 00:13:52.880 "params": { 00:13:52.880 "name": "Nvme$subsystem", 00:13:52.880 "trtype": "$TEST_TRANSPORT", 00:13:52.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:52.880 "adrfam": "ipv4", 00:13:52.880 "trsvcid": "$NVMF_PORT", 00:13:52.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:52.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:52.880 "hdgst": ${hdgst:-false}, 00:13:52.880 "ddgst": ${ddgst:-false} 00:13:52.880 }, 00:13:52.880 "method": "bdev_nvme_attach_controller" 00:13:52.880 } 00:13:52.880 EOF 00:13:52.880 )") 00:13:52.880 18:27:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:13:52.880 18:27:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
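A note on the "--json /dev/fd/62" argument in the bdevperf invocation above: the harness hands the generated JSON config to bdevperf through bash process substitution instead of writing a temporary file, and the config assembly (gen_nvmf_target_json) continues in the trace below. A minimal sketch of the same idiom is shown here; the gen_config body is illustrative only and is not the harness's actual generator.

    # Sketch: pass freshly generated JSON to bdevperf without a temp file.
    # Bash exposes the substituted pipe as /dev/fd/NN, matching the trace above.
    gen_config() {
        printf '%s\n' '{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }'
    }
    ./build/examples/bdevperf --json <(gen_config) -q 64 -o 65536 -w verify -t 1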
00:13:52.880 18:27:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:13:52.880 18:27:08 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:52.880 "params": { 00:13:52.880 "name": "Nvme0", 00:13:52.880 "trtype": "tcp", 00:13:52.880 "traddr": "10.0.0.2", 00:13:52.880 "adrfam": "ipv4", 00:13:52.880 "trsvcid": "4420", 00:13:52.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:52.880 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:52.880 "hdgst": false, 00:13:52.881 "ddgst": false 00:13:52.881 }, 00:13:52.881 "method": "bdev_nvme_attach_controller" 00:13:52.881 }' 00:13:52.881 [2024-05-13 18:27:08.479516] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:13:52.881 [2024-05-13 18:27:08.480095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77332 ] 00:13:52.881 [2024-05-13 18:27:08.619323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.881 [2024-05-13 18:27:08.735259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.139 Running I/O for 1 seconds... 00:13:54.076 00:13:54.076 Latency(us) 00:13:54.076 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.076 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:54.076 Verification LBA range: start 0x0 length 0x400 00:13:54.076 Nvme0n1 : 1.00 1594.27 99.64 0.00 0.00 39339.76 5659.93 36700.16 00:13:54.076 =================================================================================================================== 00:13:54.076 Total : 1594.27 99.64 0.00 0.00 39339.76 5659.93 36700.16 00:13:54.341 18:27:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:13:54.341 18:27:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:54.341 18:27:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:13:54.341 18:27:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:13:54.341 18:27:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:13:54.341 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:54.341 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:13:54.341 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:54.341 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:13:54.341 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:54.341 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:54.341 rmmod nvme_tcp 00:13:54.341 rmmod nvme_fabrics 00:13:54.341 rmmod nvme_keyring 00:13:54.341 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:54.600 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:13:54.600 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:13:54.600 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 77210 ']' 00:13:54.600 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- 
# killprocess 77210 00:13:54.600 18:27:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 77210 ']' 00:13:54.600 18:27:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 77210 00:13:54.600 18:27:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:13:54.600 18:27:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:54.600 18:27:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77210 00:13:54.600 18:27:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:54.600 18:27:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:54.600 killing process with pid 77210 00:13:54.600 18:27:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77210' 00:13:54.600 18:27:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 77210 00:13:54.600 [2024-05-13 18:27:10.310869] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:54.600 18:27:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 77210 00:13:54.859 [2024-05-13 18:27:10.565987] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:54.859 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:54.859 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:54.860 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:54.860 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:54.860 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:54.860 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.860 18:27:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.860 18:27:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.860 18:27:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:54.860 18:27:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:13:54.860 00:13:54.860 real 0m6.123s 00:13:54.860 user 0m23.952s 00:13:54.860 sys 0m1.382s 00:13:54.860 18:27:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:54.860 18:27:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:13:54.860 ************************************ 00:13:54.860 END TEST nvmf_host_management 00:13:54.860 ************************************ 00:13:54.860 18:27:10 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:54.860 18:27:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:54.860 18:27:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:54.860 18:27:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:54.860 ************************************ 00:13:54.860 START TEST nvmf_lvol 00:13:54.860 
************************************ 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:54.860 * Looking for test storage... 00:13:54.860 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 
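Before the nvmftestinit trace continues below, it helps to see the shape of the virtual network that nvmf_veth_init builds: an initiator interface at 10.0.0.1 and two target-side interfaces (10.0.0.2 and 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge. A condensed sketch of the core commands, using the same interface names as the harness and omitting the link-up and cleanup steps, follows.

    # Condensed sketch of the veth/bridge topology set up in the trace below.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # allow NVMe/TCP traffic to the target's port 4420 across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT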
00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@432 -- # nvmf_veth_init 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:54.860 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:55.119 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:55.119 Cannot find device "nvmf_tgt_br" 00:13:55.119 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@155 -- # true 00:13:55.119 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:55.119 Cannot find device "nvmf_tgt_br2" 00:13:55.119 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@156 -- # true 00:13:55.119 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:55.119 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:55.119 Cannot find device "nvmf_tgt_br" 00:13:55.119 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@158 -- # true 00:13:55.119 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:55.119 Cannot find device "nvmf_tgt_br2" 00:13:55.119 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@159 -- # true 00:13:55.119 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:55.119 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:55.119 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:13:55.119 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:55.119 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:13:55.120 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:55.120 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:55.120 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:13:55.120 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:55.120 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:55.120 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:55.120 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:55.120 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:55.120 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:55.120 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:55.120 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:55.120 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:55.120 18:27:10 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:55.120 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:55.120 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:55.120 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:55.120 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:55.120 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:55.120 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:55.120 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:55.120 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:55.120 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:55.120 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:55.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:55.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:13:55.379 00:13:55.379 --- 10.0.0.2 ping statistics --- 00:13:55.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.379 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:55.379 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:55.379 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:13:55.379 00:13:55.379 --- 10.0.0.3 ping statistics --- 00:13:55.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.379 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:55.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:55.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:13:55.379 00:13:55.379 --- 10.0.0.1 ping statistics --- 00:13:55.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.379 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@433 -- # return 0 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=77546 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 77546 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 77546 ']' 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:55.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:55.379 18:27:11 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:55.379 [2024-05-13 18:27:11.189347] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
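While the target application finishes initializing (its DPDK EAL parameters are printed next), the rest of nvmf_lvol.sh is easiest to read as a plain RPC sequence. A condensed sketch is given here; "rpc.py" stands for scripts/rpc.py, the shell variables stand in for the per-run UUIDs printed later in this log, and the perf run and cleanup are omitted.

    # Condensed sketch of the nvmf_lvol RPC flow traced below (values differ per run).
    rpc.py bdev_malloc_create 64 512                    # Malloc0
    rpc.py bdev_malloc_create 64 512                    # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # spdk_nvme_perf runs against the listener while the snapshot/clone steps execute
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$lvol" 30
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"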
00:13:55.379 [2024-05-13 18:27:11.189450] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.638 [2024-05-13 18:27:11.332015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:55.638 [2024-05-13 18:27:11.463379] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:55.638 [2024-05-13 18:27:11.463454] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:55.638 [2024-05-13 18:27:11.463468] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.638 [2024-05-13 18:27:11.463479] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.638 [2024-05-13 18:27:11.463488] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.638 [2024-05-13 18:27:11.463632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:55.638 [2024-05-13 18:27:11.464697] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:55.638 [2024-05-13 18:27:11.464712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.205 18:27:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:56.205 18:27:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:13:56.205 18:27:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.205 18:27:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:56.205 18:27:12 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:13:56.464 18:27:12 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.464 18:27:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:56.722 [2024-05-13 18:27:12.426200] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.722 18:27:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:56.981 18:27:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:56.981 18:27:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:57.240 18:27:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:57.240 18:27:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:57.506 18:27:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:57.779 18:27:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=977ae503-6bc5-4399-a700-e3839ec50ff3 00:13:57.779 18:27:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 977ae503-6bc5-4399-a700-e3839ec50ff3 lvol 20 00:13:58.037 18:27:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=4e6e4809-765f-478a-8f22-8f46aec5e6b1 00:13:58.037 18:27:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:58.295 18:27:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4e6e4809-765f-478a-8f22-8f46aec5e6b1 00:13:58.553 18:27:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:58.812 [2024-05-13 18:27:14.497673] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:58.812 [2024-05-13 18:27:14.498282] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.812 18:27:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:59.070 18:27:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:59.070 18:27:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=77689 00:13:59.070 18:27:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:00.006 18:27:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 4e6e4809-765f-478a-8f22-8f46aec5e6b1 MY_SNAPSHOT 00:14:00.265 18:27:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=99e030ef-f036-4d8c-88d8-5824ef1f4482 00:14:00.265 18:27:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 4e6e4809-765f-478a-8f22-8f46aec5e6b1 30 00:14:00.524 18:27:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 99e030ef-f036-4d8c-88d8-5824ef1f4482 MY_CLONE 00:14:00.783 18:27:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b206582e-e2ff-49eb-809f-df08cd409004 00:14:00.783 18:27:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate b206582e-e2ff-49eb-809f-df08cd409004 00:14:01.351 18:27:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 77689 00:14:09.520 Initializing NVMe Controllers 00:14:09.520 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:09.520 Controller IO queue size 128, less than required. 00:14:09.520 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:09.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:09.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:09.520 Initialization complete. Launching workers. 
00:14:09.520 ======================================================== 00:14:09.520 Latency(us) 00:14:09.520 Device Information : IOPS MiB/s Average min max 00:14:09.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10681.90 41.73 11989.65 2250.93 76658.78 00:14:09.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10898.20 42.57 11744.49 469.68 67577.77 00:14:09.520 ======================================================== 00:14:09.520 Total : 21580.10 84.30 11865.84 469.68 76658.78 00:14:09.520 00:14:09.520 18:27:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:09.520 18:27:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 4e6e4809-765f-478a-8f22-8f46aec5e6b1 00:14:09.779 18:27:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 977ae503-6bc5-4399-a700-e3839ec50ff3 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:10.038 rmmod nvme_tcp 00:14:10.038 rmmod nvme_fabrics 00:14:10.038 rmmod nvme_keyring 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 77546 ']' 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 77546 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 77546 ']' 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 77546 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77546 00:14:10.038 killing process with pid 77546 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77546' 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 77546 00:14:10.038 [2024-05-13 18:27:25.959944] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:10.038 18:27:25 nvmf_tcp.nvmf_lvol -- 
common/autotest_common.sh@970 -- # wait 77546 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:10.606 00:14:10.606 real 0m15.625s 00:14:10.606 user 1m5.107s 00:14:10.606 sys 0m3.897s 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:10.606 ************************************ 00:14:10.606 END TEST nvmf_lvol 00:14:10.606 ************************************ 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:10.606 18:27:26 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:10.606 18:27:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:10.606 18:27:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:10.606 18:27:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:10.606 ************************************ 00:14:10.606 START TEST nvmf_lvs_grow 00:14:10.606 ************************************ 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:10.606 * Looking for test storage... 
00:14:10.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.606 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:10.607 Cannot find device "nvmf_tgt_br" 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@155 -- # true 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:10.607 Cannot find device "nvmf_tgt_br2" 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@156 -- # true 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:10.607 Cannot find device "nvmf_tgt_br" 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@158 -- # true 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:10.607 Cannot find device "nvmf_tgt_br2" 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@159 -- # true 00:14:10.607 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:10.866 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:10.866 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:10.866 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:10.867 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:10.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:10.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:14:10.867 00:14:10.867 --- 10.0.0.2 ping statistics --- 00:14:10.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.867 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:14:10.867 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:10.867 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:10.867 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:14:10.867 00:14:10.867 --- 10.0.0.3 ping statistics --- 00:14:10.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:10.867 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:14:10.867 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:11.125 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:11.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:11.126 00:14:11.126 --- 10.0.0.1 ping statistics --- 00:14:11.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.126 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@433 -- # return 0 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=78048 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 78048 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 78048 ']' 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:11.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
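Once this second target instance is up, the lvs_grow_clean case exercised in the remainder of the excerpt checks how a logical volume store behaves when its backing AIO file grows. A condensed sketch of the steps follows; the file path and shell variables are illustrative, "rpc.py" stands for scripts/rpc.py, and the sizes and cluster count come from the trace below.

    # Condensed sketch of the lvs_grow_clean flow traced below.
    truncate -s 200M ./aio_bdev_file
    rpc.py bdev_aio_create ./aio_bdev_file aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 at 200M
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M ./aio_bdev_file          # grow the backing file
    rpc.py bdev_aio_rescan aio_bdev           # AIO bdev resizes to the new block count
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # still reports 49 right after the rescan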
00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:11.126 18:27:26 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:11.126 [2024-05-13 18:27:26.900106] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:14:11.126 [2024-05-13 18:27:26.900210] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.126 [2024-05-13 18:27:27.040666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.384 [2024-05-13 18:27:27.169923] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.384 [2024-05-13 18:27:27.169984] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.384 [2024-05-13 18:27:27.169998] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.384 [2024-05-13 18:27:27.170009] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.384 [2024-05-13 18:27:27.170018] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:11.384 [2024-05-13 18:27:27.170049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.320 18:27:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:12.320 18:27:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:14:12.320 18:27:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:12.320 18:27:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:12.320 18:27:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:12.320 18:27:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.320 18:27:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:12.320 [2024-05-13 18:27:28.190259] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.320 18:27:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:14:12.320 18:27:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:12.320 18:27:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:12.320 18:27:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:12.320 ************************************ 00:14:12.320 START TEST lvs_grow_clean 00:14:12.320 ************************************ 00:14:12.320 18:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:14:12.320 18:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:12.320 18:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:12.320 18:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:12.320 18:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:12.320 18:27:28 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:12.320 18:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:12.320 18:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:12.320 18:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:12.320 18:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:12.579 18:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:12.579 18:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:12.897 18:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=9bad44ee-8aa3-4ad5-9054-1917bc4b0788 00:14:12.897 18:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bad44ee-8aa3-4ad5-9054-1917bc4b0788 00:14:12.897 18:27:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:13.173 18:27:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:13.173 18:27:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:13.173 18:27:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9bad44ee-8aa3-4ad5-9054-1917bc4b0788 lvol 150 00:14:13.432 18:27:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bf893609-1593-464a-ae96-ebf8dc1222f5 00:14:13.432 18:27:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:13.432 18:27:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:13.693 [2024-05-13 18:27:29.609164] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:13.693 [2024-05-13 18:27:29.609246] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:13.693 true 00:14:13.951 18:27:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bad44ee-8aa3-4ad5-9054-1917bc4b0788 00:14:13.951 18:27:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:14.210 18:27:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:14.210 18:27:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:14.469 18:27:30 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bf893609-1593-464a-ae96-ebf8dc1222f5 00:14:14.469 18:27:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:14.728 [2024-05-13 18:27:30.600019] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:14.728 [2024-05-13 18:27:30.600324] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.728 18:27:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:14.986 18:27:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=78215 00:14:14.986 18:27:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:14.986 18:27:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:14.986 18:27:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 78215 /var/tmp/bdevperf.sock 00:14:14.986 18:27:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 78215 ']' 00:14:14.986 18:27:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:14.986 18:27:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:14.986 18:27:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:14.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:14.986 18:27:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:14.986 18:27:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:14.986 [2024-05-13 18:27:30.907894] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
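The bdevperf process launched above runs with -z (start idle and wait for RPC), so the workload is wired up over its private RPC socket rather than the target's. A minimal sketch of that attach-and-run step, using the same flags seen in the log (the socket poll is a simplified stand-in for the test's waitforlisten helper):

SPDK=/home/vagrant/spdk_repo/spdk
BDEVPERF_SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode0

# Start bdevperf idle: 4 KiB random writes, queue depth 128, 10 s, core mask 0x2.
"$SPDK"/build/examples/bdevperf -r "$BDEVPERF_SOCK" -m 0x2 -o 4096 -q 128 \
    -w randwrite -t 10 -S 1 -z &
bdevperf_pid=$!
until [ -S "$BDEVPERF_SOCK" ]; do sleep 0.2; done   # simplified wait for the RPC socket

# Attach an NVMe/TCP controller to the exported subsystem; the lvol shows up as Nvme0n1.
"$SPDK"/scripts/rpc.py -s "$BDEVPERF_SOCK" bdev_nvme_attach_controller -b Nvme0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"

# Kick off the configured workload; the per-second latency tables below are its output.
"$SPDK"/examples/bdev/bdevperf/bdevperf.py -s "$BDEVPERF_SOCK" perform_tests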
00:14:14.986 [2024-05-13 18:27:30.907996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78215 ] 00:14:15.245 [2024-05-13 18:27:31.036963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.245 [2024-05-13 18:27:31.153286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.180 18:27:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:16.180 18:27:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:14:16.180 18:27:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:16.440 Nvme0n1 00:14:16.440 18:27:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:16.699 [ 00:14:16.699 { 00:14:16.699 "aliases": [ 00:14:16.699 "bf893609-1593-464a-ae96-ebf8dc1222f5" 00:14:16.699 ], 00:14:16.699 "assigned_rate_limits": { 00:14:16.699 "r_mbytes_per_sec": 0, 00:14:16.699 "rw_ios_per_sec": 0, 00:14:16.699 "rw_mbytes_per_sec": 0, 00:14:16.699 "w_mbytes_per_sec": 0 00:14:16.699 }, 00:14:16.699 "block_size": 4096, 00:14:16.699 "claimed": false, 00:14:16.699 "driver_specific": { 00:14:16.699 "mp_policy": "active_passive", 00:14:16.699 "nvme": [ 00:14:16.699 { 00:14:16.699 "ctrlr_data": { 00:14:16.699 "ana_reporting": false, 00:14:16.699 "cntlid": 1, 00:14:16.699 "firmware_revision": "24.05", 00:14:16.699 "model_number": "SPDK bdev Controller", 00:14:16.699 "multi_ctrlr": true, 00:14:16.699 "oacs": { 00:14:16.699 "firmware": 0, 00:14:16.699 "format": 0, 00:14:16.699 "ns_manage": 0, 00:14:16.699 "security": 0 00:14:16.699 }, 00:14:16.699 "serial_number": "SPDK0", 00:14:16.699 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:16.699 "vendor_id": "0x8086" 00:14:16.699 }, 00:14:16.699 "ns_data": { 00:14:16.699 "can_share": true, 00:14:16.699 "id": 1 00:14:16.699 }, 00:14:16.699 "trid": { 00:14:16.699 "adrfam": "IPv4", 00:14:16.699 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:16.699 "traddr": "10.0.0.2", 00:14:16.699 "trsvcid": "4420", 00:14:16.699 "trtype": "TCP" 00:14:16.699 }, 00:14:16.699 "vs": { 00:14:16.699 "nvme_version": "1.3" 00:14:16.699 } 00:14:16.699 } 00:14:16.699 ] 00:14:16.700 }, 00:14:16.700 "memory_domains": [ 00:14:16.700 { 00:14:16.700 "dma_device_id": "system", 00:14:16.700 "dma_device_type": 1 00:14:16.700 } 00:14:16.700 ], 00:14:16.700 "name": "Nvme0n1", 00:14:16.700 "num_blocks": 38912, 00:14:16.700 "product_name": "NVMe disk", 00:14:16.700 "supported_io_types": { 00:14:16.700 "abort": true, 00:14:16.700 "compare": true, 00:14:16.700 "compare_and_write": true, 00:14:16.700 "flush": true, 00:14:16.700 "nvme_admin": true, 00:14:16.700 "nvme_io": true, 00:14:16.700 "read": true, 00:14:16.700 "reset": true, 00:14:16.700 "unmap": true, 00:14:16.700 "write": true, 00:14:16.700 "write_zeroes": true 00:14:16.700 }, 00:14:16.700 "uuid": "bf893609-1593-464a-ae96-ebf8dc1222f5", 00:14:16.700 "zoned": false 00:14:16.700 } 00:14:16.700 ] 00:14:16.700 18:27:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=78263 00:14:16.700 18:27:32 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:16.700 18:27:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:16.700 Running I/O for 10 seconds... 00:14:17.634 Latency(us) 00:14:17.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:17.634 Nvme0n1 : 1.00 8276.00 32.33 0.00 0.00 0.00 0.00 0.00 00:14:17.634 =================================================================================================================== 00:14:17.634 Total : 8276.00 32.33 0.00 0.00 0.00 0.00 0.00 00:14:17.634 00:14:18.569 18:27:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9bad44ee-8aa3-4ad5-9054-1917bc4b0788 00:14:18.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:18.828 Nvme0n1 : 2.00 8269.00 32.30 0.00 0.00 0.00 0.00 0.00 00:14:18.828 =================================================================================================================== 00:14:18.828 Total : 8269.00 32.30 0.00 0.00 0.00 0.00 0.00 00:14:18.828 00:14:18.828 true 00:14:19.087 18:27:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bad44ee-8aa3-4ad5-9054-1917bc4b0788 00:14:19.087 18:27:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:19.345 18:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:19.345 18:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:19.345 18:27:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 78263 00:14:19.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:19.912 Nvme0n1 : 3.00 8380.67 32.74 0.00 0.00 0.00 0.00 0.00 00:14:19.912 =================================================================================================================== 00:14:19.912 Total : 8380.67 32.74 0.00 0.00 0.00 0.00 0.00 00:14:19.912 00:14:20.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:20.848 Nvme0n1 : 4.00 8378.25 32.73 0.00 0.00 0.00 0.00 0.00 00:14:20.848 =================================================================================================================== 00:14:20.848 Total : 8378.25 32.73 0.00 0.00 0.00 0.00 0.00 00:14:20.848 00:14:21.796 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:21.796 Nvme0n1 : 5.00 8315.60 32.48 0.00 0.00 0.00 0.00 0.00 00:14:21.796 =================================================================================================================== 00:14:21.796 Total : 8315.60 32.48 0.00 0.00 0.00 0.00 0.00 00:14:21.796 00:14:22.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:22.764 Nvme0n1 : 6.00 8284.33 32.36 0.00 0.00 0.00 0.00 0.00 00:14:22.764 =================================================================================================================== 00:14:22.764 Total : 8284.33 32.36 0.00 0.00 0.00 0.00 0.00 00:14:22.764 00:14:23.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:14:23.696 Nvme0n1 : 7.00 8231.14 32.15 0.00 0.00 0.00 0.00 0.00 00:14:23.696 =================================================================================================================== 00:14:23.696 Total : 8231.14 32.15 0.00 0.00 0.00 0.00 0.00 00:14:23.696 00:14:24.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.629 Nvme0n1 : 8.00 8211.88 32.08 0.00 0.00 0.00 0.00 0.00 00:14:24.629 =================================================================================================================== 00:14:24.629 Total : 8211.88 32.08 0.00 0.00 0.00 0.00 0.00 00:14:24.629 00:14:26.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.004 Nvme0n1 : 9.00 8207.00 32.06 0.00 0.00 0.00 0.00 0.00 00:14:26.004 =================================================================================================================== 00:14:26.004 Total : 8207.00 32.06 0.00 0.00 0.00 0.00 0.00 00:14:26.004 00:14:26.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.940 Nvme0n1 : 10.00 8216.00 32.09 0.00 0.00 0.00 0.00 0.00 00:14:26.940 =================================================================================================================== 00:14:26.940 Total : 8216.00 32.09 0.00 0.00 0.00 0.00 0.00 00:14:26.940 00:14:26.940 00:14:26.940 Latency(us) 00:14:26.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.940 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.940 Nvme0n1 : 10.01 8221.32 32.11 0.00 0.00 15564.51 7477.06 45994.36 00:14:26.940 =================================================================================================================== 00:14:26.940 Total : 8221.32 32.11 0.00 0.00 15564.51 7477.06 45994.36 00:14:26.940 0 00:14:26.940 18:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 78215 00:14:26.940 18:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 78215 ']' 00:14:26.940 18:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 78215 00:14:26.940 18:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:14:26.940 18:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:26.940 18:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78215 00:14:26.940 18:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:26.940 18:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:26.940 killing process with pid 78215 00:14:26.940 18:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78215' 00:14:26.940 18:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 78215 00:14:26.940 Received shutdown signal, test time was about 10.000000 seconds 00:14:26.940 00:14:26.940 Latency(us) 00:14:26.940 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:26.940 =================================================================================================================== 00:14:26.940 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:26.940 18:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@970 -- # wait 78215 00:14:26.940 18:27:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:27.506 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:27.506 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bad44ee-8aa3-4ad5-9054-1917bc4b0788 00:14:27.506 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:27.764 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:27.764 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:14:27.764 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:28.021 [2024-05-13 18:27:43.858210] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:28.021 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bad44ee-8aa3-4ad5-9054-1917bc4b0788 00:14:28.021 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:14:28.021 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bad44ee-8aa3-4ad5-9054-1917bc4b0788 00:14:28.021 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:28.021 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.021 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:28.022 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.022 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:28.022 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:28.022 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:28.022 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:28.022 18:27:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bad44ee-8aa3-4ad5-9054-1917bc4b0788 00:14:28.279 2024/05/13 18:27:44 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:9bad44ee-8aa3-4ad5-9054-1917bc4b0788], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:28.279 request: 00:14:28.279 { 00:14:28.279 "method": "bdev_lvol_get_lvstores", 00:14:28.279 "params": { 00:14:28.279 "uuid": 
"9bad44ee-8aa3-4ad5-9054-1917bc4b0788" 00:14:28.279 } 00:14:28.279 } 00:14:28.279 Got JSON-RPC error response 00:14:28.279 GoRPCClient: error on JSON-RPC call 00:14:28.279 18:27:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:14:28.279 18:27:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:28.279 18:27:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:28.279 18:27:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:28.279 18:27:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:28.536 aio_bdev 00:14:28.794 18:27:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev bf893609-1593-464a-ae96-ebf8dc1222f5 00:14:28.794 18:27:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=bf893609-1593-464a-ae96-ebf8dc1222f5 00:14:28.794 18:27:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:28.794 18:27:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:14:28.794 18:27:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:28.794 18:27:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:28.794 18:27:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:29.053 18:27:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bf893609-1593-464a-ae96-ebf8dc1222f5 -t 2000 00:14:29.311 [ 00:14:29.311 { 00:14:29.311 "aliases": [ 00:14:29.311 "lvs/lvol" 00:14:29.311 ], 00:14:29.311 "assigned_rate_limits": { 00:14:29.311 "r_mbytes_per_sec": 0, 00:14:29.311 "rw_ios_per_sec": 0, 00:14:29.311 "rw_mbytes_per_sec": 0, 00:14:29.311 "w_mbytes_per_sec": 0 00:14:29.311 }, 00:14:29.311 "block_size": 4096, 00:14:29.311 "claimed": false, 00:14:29.311 "driver_specific": { 00:14:29.311 "lvol": { 00:14:29.311 "base_bdev": "aio_bdev", 00:14:29.311 "clone": false, 00:14:29.311 "esnap_clone": false, 00:14:29.311 "lvol_store_uuid": "9bad44ee-8aa3-4ad5-9054-1917bc4b0788", 00:14:29.311 "num_allocated_clusters": 38, 00:14:29.311 "snapshot": false, 00:14:29.311 "thin_provision": false 00:14:29.311 } 00:14:29.311 }, 00:14:29.311 "name": "bf893609-1593-464a-ae96-ebf8dc1222f5", 00:14:29.311 "num_blocks": 38912, 00:14:29.311 "product_name": "Logical Volume", 00:14:29.311 "supported_io_types": { 00:14:29.311 "abort": false, 00:14:29.311 "compare": false, 00:14:29.311 "compare_and_write": false, 00:14:29.311 "flush": false, 00:14:29.311 "nvme_admin": false, 00:14:29.311 "nvme_io": false, 00:14:29.311 "read": true, 00:14:29.311 "reset": true, 00:14:29.311 "unmap": true, 00:14:29.311 "write": true, 00:14:29.311 "write_zeroes": true 00:14:29.311 }, 00:14:29.311 "uuid": "bf893609-1593-464a-ae96-ebf8dc1222f5", 00:14:29.311 "zoned": false 00:14:29.311 } 00:14:29.311 ] 00:14:29.311 18:27:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:14:29.311 18:27:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bad44ee-8aa3-4ad5-9054-1917bc4b0788 00:14:29.311 18:27:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:29.569 18:27:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:29.569 18:27:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9bad44ee-8aa3-4ad5-9054-1917bc4b0788 00:14:29.569 18:27:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:29.828 18:27:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:29.828 18:27:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bf893609-1593-464a-ae96-ebf8dc1222f5 00:14:30.086 18:27:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9bad44ee-8aa3-4ad5-9054-1917bc4b0788 00:14:30.345 18:27:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:30.603 18:27:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:30.861 ************************************ 00:14:30.861 END TEST lvs_grow_clean 00:14:30.861 ************************************ 00:14:30.861 00:14:30.861 real 0m18.564s 00:14:30.861 user 0m17.816s 00:14:30.861 sys 0m2.202s 00:14:30.861 18:27:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:30.861 18:27:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:14:31.119 18:27:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:31.119 18:27:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:31.119 18:27:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:31.119 18:27:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:31.119 ************************************ 00:14:31.119 START TEST lvs_grow_dirty 00:14:31.119 ************************************ 00:14:31.119 18:27:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:14:31.119 18:27:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:31.119 18:27:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:31.119 18:27:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:31.119 18:27:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:31.119 18:27:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:31.119 18:27:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:31.119 18:27:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:31.119 18:27:46 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:31.119 18:27:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:31.378 18:27:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:31.378 18:27:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:31.636 18:27:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5cf01f0a-4de7-440d-89b6-6bcd63ffe234 00:14:31.636 18:27:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cf01f0a-4de7-440d-89b6-6bcd63ffe234 00:14:31.636 18:27:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:31.895 18:27:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:31.895 18:27:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:31.895 18:27:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5cf01f0a-4de7-440d-89b6-6bcd63ffe234 lvol 150 00:14:32.153 18:27:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=22ecf5f2-f3b7-4d08-a5db-5e45536be497 00:14:32.154 18:27:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:32.154 18:27:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:32.423 [2024-05-13 18:27:48.176502] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:32.423 [2024-05-13 18:27:48.176596] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:32.423 true 00:14:32.423 18:27:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cf01f0a-4de7-440d-89b6-6bcd63ffe234 00:14:32.423 18:27:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:32.682 18:27:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:32.682 18:27:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:32.951 18:27:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 22ecf5f2-f3b7-4d08-a5db-5e45536be497 00:14:33.211 18:27:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
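The dirty run above rebuilds the same stack the clean run used: a file-backed AIO bdev, a logical volume store with 4 MiB clusters on top of it, a 150 MiB lvol, and an NVMe/TCP subsystem exporting that lvol. Condensed into the underlying rpc.py calls (rpc.py here stands for scripts/rpc.py in the SPDK tree; the store and lvol identifiers are per-run values printed by the create calls):

SPDK=/home/vagrant/spdk_repo/spdk
AIO_FILE="$SPDK"/test/nvmf/target/aio_bdev
NQN=nqn.2016-06.io.spdk:cnode0

# 200 MiB backing file exposed as AIO bdev "aio_bdev" with a 4 KiB block size.
rm -f "$AIO_FILE"
truncate -s 200M "$AIO_FILE"
"$SPDK"/scripts/rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096

# Lvstore with 4 MiB clusters, then a 150 MiB lvol; both calls print the new ID.
lvs=$("$SPDK"/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
lvol=$("$SPDK"/scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)

# Double the backing file and let the AIO bdev notice the new size.
truncate -s 400M "$AIO_FILE"
"$SPDK"/scripts/rpc.py bdev_aio_rescan aio_bdev

# Export the lvol over NVMe/TCP (the tcp transport itself was created once, earlier
# in the log, with: rpc.py nvmf_create_transport -t tcp -o -u 8192).
"$SPDK"/scripts/rpc.py nvmf_create_subsystem "$NQN" -a -s SPDK0
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_ns "$NQN" "$lvol"
"$SPDK"/scripts/rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

At this point the store reports 49 total data clusters (a 200 MiB file at 4 MiB per cluster, less the store's metadata), which is what the (( data_clusters == 49 )) checks in both runs assert.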
00:14:33.482 [2024-05-13 18:27:49.221123] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.482 18:27:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:33.744 18:27:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=78657 00:14:33.744 18:27:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:33.744 18:27:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:33.744 18:27:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 78657 /var/tmp/bdevperf.sock 00:14:33.744 18:27:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 78657 ']' 00:14:33.744 18:27:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:33.744 18:27:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:33.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:33.744 18:27:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:33.744 18:27:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:33.744 18:27:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:33.744 [2024-05-13 18:27:49.602176] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:14:33.744 [2024-05-13 18:27:49.602347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78657 ] 00:14:34.002 [2024-05-13 18:27:49.747297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.002 [2024-05-13 18:27:49.860317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.571 18:27:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:34.571 18:27:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:34.571 18:27:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:35.138 Nvme0n1 00:14:35.139 18:27:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:35.396 [ 00:14:35.396 { 00:14:35.396 "aliases": [ 00:14:35.396 "22ecf5f2-f3b7-4d08-a5db-5e45536be497" 00:14:35.396 ], 00:14:35.396 "assigned_rate_limits": { 00:14:35.396 "r_mbytes_per_sec": 0, 00:14:35.396 "rw_ios_per_sec": 0, 00:14:35.396 "rw_mbytes_per_sec": 0, 00:14:35.396 "w_mbytes_per_sec": 0 00:14:35.396 }, 00:14:35.396 "block_size": 4096, 00:14:35.396 "claimed": false, 00:14:35.396 "driver_specific": { 00:14:35.396 "mp_policy": "active_passive", 00:14:35.396 "nvme": [ 00:14:35.396 { 00:14:35.396 "ctrlr_data": { 00:14:35.396 "ana_reporting": false, 00:14:35.396 "cntlid": 1, 00:14:35.396 "firmware_revision": "24.05", 00:14:35.396 "model_number": "SPDK bdev Controller", 00:14:35.396 "multi_ctrlr": true, 00:14:35.396 "oacs": { 00:14:35.396 "firmware": 0, 00:14:35.396 "format": 0, 00:14:35.396 "ns_manage": 0, 00:14:35.396 "security": 0 00:14:35.396 }, 00:14:35.396 "serial_number": "SPDK0", 00:14:35.396 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:35.396 "vendor_id": "0x8086" 00:14:35.396 }, 00:14:35.396 "ns_data": { 00:14:35.396 "can_share": true, 00:14:35.396 "id": 1 00:14:35.396 }, 00:14:35.396 "trid": { 00:14:35.396 "adrfam": "IPv4", 00:14:35.396 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:35.396 "traddr": "10.0.0.2", 00:14:35.396 "trsvcid": "4420", 00:14:35.396 "trtype": "TCP" 00:14:35.396 }, 00:14:35.396 "vs": { 00:14:35.396 "nvme_version": "1.3" 00:14:35.396 } 00:14:35.396 } 00:14:35.396 ] 00:14:35.396 }, 00:14:35.396 "memory_domains": [ 00:14:35.396 { 00:14:35.396 "dma_device_id": "system", 00:14:35.396 "dma_device_type": 1 00:14:35.396 } 00:14:35.396 ], 00:14:35.396 "name": "Nvme0n1", 00:14:35.396 "num_blocks": 38912, 00:14:35.396 "product_name": "NVMe disk", 00:14:35.396 "supported_io_types": { 00:14:35.396 "abort": true, 00:14:35.396 "compare": true, 00:14:35.396 "compare_and_write": true, 00:14:35.396 "flush": true, 00:14:35.396 "nvme_admin": true, 00:14:35.396 "nvme_io": true, 00:14:35.396 "read": true, 00:14:35.396 "reset": true, 00:14:35.396 "unmap": true, 00:14:35.396 "write": true, 00:14:35.396 "write_zeroes": true 00:14:35.396 }, 00:14:35.396 "uuid": "22ecf5f2-f3b7-4d08-a5db-5e45536be497", 00:14:35.396 "zoned": false 00:14:35.396 } 00:14:35.396 ] 00:14:35.396 18:27:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=78705 00:14:35.396 18:27:51 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:35.396 18:27:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:35.396 Running I/O for 10 seconds... 00:14:36.326 Latency(us) 00:14:36.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.326 Nvme0n1 : 1.00 8783.00 34.31 0.00 0.00 0.00 0.00 0.00 00:14:36.326 =================================================================================================================== 00:14:36.326 Total : 8783.00 34.31 0.00 0.00 0.00 0.00 0.00 00:14:36.326 00:14:37.260 18:27:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5cf01f0a-4de7-440d-89b6-6bcd63ffe234 00:14:37.517 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:37.517 Nvme0n1 : 2.00 8342.50 32.59 0.00 0.00 0.00 0.00 0.00 00:14:37.517 =================================================================================================================== 00:14:37.517 Total : 8342.50 32.59 0.00 0.00 0.00 0.00 0.00 00:14:37.517 00:14:37.775 true 00:14:37.775 18:27:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cf01f0a-4de7-440d-89b6-6bcd63ffe234 00:14:37.775 18:27:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:38.034 18:27:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:38.034 18:27:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:38.034 18:27:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 78705 00:14:38.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:38.600 Nvme0n1 : 3.00 8101.33 31.65 0.00 0.00 0.00 0.00 0.00 00:14:38.600 =================================================================================================================== 00:14:38.600 Total : 8101.33 31.65 0.00 0.00 0.00 0.00 0.00 00:14:38.600 00:14:39.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:39.534 Nvme0n1 : 4.00 8185.50 31.97 0.00 0.00 0.00 0.00 0.00 00:14:39.534 =================================================================================================================== 00:14:39.534 Total : 8185.50 31.97 0.00 0.00 0.00 0.00 0.00 00:14:39.534 00:14:40.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:40.467 Nvme0n1 : 5.00 8225.60 32.13 0.00 0.00 0.00 0.00 0.00 00:14:40.467 =================================================================================================================== 00:14:40.467 Total : 8225.60 32.13 0.00 0.00 0.00 0.00 0.00 00:14:40.467 00:14:41.402 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:41.402 Nvme0n1 : 6.00 7762.83 30.32 0.00 0.00 0.00 0.00 0.00 00:14:41.402 =================================================================================================================== 00:14:41.402 Total : 7762.83 30.32 0.00 0.00 0.00 0.00 0.00 00:14:41.402 00:14:42.335 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:14:42.335 Nvme0n1 : 7.00 7787.57 30.42 0.00 0.00 0.00 0.00 0.00 00:14:42.335 =================================================================================================================== 00:14:42.335 Total : 7787.57 30.42 0.00 0.00 0.00 0.00 0.00 00:14:42.335 00:14:43.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:43.775 Nvme0n1 : 8.00 7813.62 30.52 0.00 0.00 0.00 0.00 0.00 00:14:43.775 =================================================================================================================== 00:14:43.775 Total : 7813.62 30.52 0.00 0.00 0.00 0.00 0.00 00:14:43.775 00:14:44.342 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.342 Nvme0n1 : 9.00 7827.00 30.57 0.00 0.00 0.00 0.00 0.00 00:14:44.342 =================================================================================================================== 00:14:44.342 Total : 7827.00 30.57 0.00 0.00 0.00 0.00 0.00 00:14:44.342 00:14:45.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.717 Nvme0n1 : 10.00 7850.70 30.67 0.00 0.00 0.00 0.00 0.00 00:14:45.717 =================================================================================================================== 00:14:45.717 Total : 7850.70 30.67 0.00 0.00 0.00 0.00 0.00 00:14:45.717 00:14:45.717 00:14:45.717 Latency(us) 00:14:45.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.717 Nvme0n1 : 10.00 7859.86 30.70 0.00 0.00 16281.16 5779.08 350796.33 00:14:45.717 =================================================================================================================== 00:14:45.717 Total : 7859.86 30.70 0.00 0.00 16281.16 5779.08 350796.33 00:14:45.717 0 00:14:45.717 18:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 78657 00:14:45.717 18:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 78657 ']' 00:14:45.717 18:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 78657 00:14:45.717 18:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:14:45.717 18:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:45.717 18:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78657 00:14:45.717 18:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:45.717 18:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:45.717 killing process with pid 78657 00:14:45.717 18:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78657' 00:14:45.717 18:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 78657 00:14:45.717 Received shutdown signal, test time was about 10.000000 seconds 00:14:45.717 00:14:45.717 Latency(us) 00:14:45.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:45.718 =================================================================================================================== 00:14:45.718 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:45.718 18:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@970 -- # wait 78657 00:14:45.718 18:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.976 18:28:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:46.235 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cf01f0a-4de7-440d-89b6-6bcd63ffe234 00:14:46.235 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:14:46.493 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:14:46.493 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:14:46.493 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 78048 00:14:46.493 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 78048 00:14:46.752 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 78048 Killed "${NVMF_APP[@]}" "$@" 00:14:46.752 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:14:46.752 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:14:46.752 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:46.752 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:46.752 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:46.752 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=78869 00:14:46.752 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 78869 00:14:46.752 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:46.752 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 78869 ']' 00:14:46.752 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.752 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:46.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.752 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.752 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:46.752 18:28:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:46.753 [2024-05-13 18:28:02.521979] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:14:46.753 [2024-05-13 18:28:02.522098] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.753 [2024-05-13 18:28:02.659416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.011 [2024-05-13 18:28:02.778434] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.011 [2024-05-13 18:28:02.778496] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:47.011 [2024-05-13 18:28:02.778508] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.011 [2024-05-13 18:28:02.778517] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.011 [2024-05-13 18:28:02.778524] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.011 [2024-05-13 18:28:02.778552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.578 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:47.578 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:14:47.578 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:47.578 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:47.578 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:47.839 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.839 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:48.098 [2024-05-13 18:28:03.795263] blobstore.c:4805:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:48.098 [2024-05-13 18:28:03.795663] blobstore.c:4752:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:48.098 [2024-05-13 18:28:03.795859] blobstore.c:4752:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:48.098 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:14:48.098 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 22ecf5f2-f3b7-4d08-a5db-5e45536be497 00:14:48.098 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=22ecf5f2-f3b7-4d08-a5db-5e45536be497 00:14:48.098 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:48.098 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:48.098 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:48.098 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:48.098 18:28:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:48.356 18:28:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 22ecf5f2-f3b7-4d08-a5db-5e45536be497 -t 2000 00:14:48.615 [ 00:14:48.615 { 00:14:48.615 "aliases": [ 00:14:48.615 "lvs/lvol" 00:14:48.615 ], 00:14:48.615 "assigned_rate_limits": { 00:14:48.615 "r_mbytes_per_sec": 0, 00:14:48.615 "rw_ios_per_sec": 0, 00:14:48.615 "rw_mbytes_per_sec": 0, 00:14:48.615 "w_mbytes_per_sec": 0 00:14:48.615 }, 00:14:48.615 "block_size": 4096, 00:14:48.615 "claimed": false, 00:14:48.615 "driver_specific": { 00:14:48.615 "lvol": { 00:14:48.615 "base_bdev": "aio_bdev", 00:14:48.615 "clone": false, 00:14:48.615 "esnap_clone": false, 00:14:48.615 "lvol_store_uuid": "5cf01f0a-4de7-440d-89b6-6bcd63ffe234", 00:14:48.615 "num_allocated_clusters": 38, 00:14:48.615 "snapshot": false, 00:14:48.615 "thin_provision": false 00:14:48.615 } 00:14:48.615 }, 00:14:48.615 "name": "22ecf5f2-f3b7-4d08-a5db-5e45536be497", 00:14:48.615 "num_blocks": 38912, 00:14:48.615 "product_name": "Logical Volume", 00:14:48.615 "supported_io_types": { 00:14:48.615 "abort": false, 00:14:48.615 "compare": false, 00:14:48.615 "compare_and_write": false, 00:14:48.615 "flush": false, 00:14:48.615 "nvme_admin": false, 00:14:48.615 "nvme_io": false, 00:14:48.615 "read": true, 00:14:48.615 "reset": true, 00:14:48.615 "unmap": true, 00:14:48.615 "write": true, 00:14:48.615 "write_zeroes": true 00:14:48.615 }, 00:14:48.615 "uuid": "22ecf5f2-f3b7-4d08-a5db-5e45536be497", 00:14:48.615 "zoned": false 00:14:48.615 } 00:14:48.615 ] 00:14:48.615 18:28:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:48.616 18:28:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:14:48.616 18:28:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cf01f0a-4de7-440d-89b6-6bcd63ffe234 00:14:48.875 18:28:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:14:48.875 18:28:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cf01f0a-4de7-440d-89b6-6bcd63ffe234 00:14:48.875 18:28:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:14:49.133 18:28:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:14:49.133 18:28:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:49.392 [2024-05-13 18:28:05.108510] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:49.392 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cf01f0a-4de7-440d-89b6-6bcd63ffe234 00:14:49.392 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:14:49.392 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cf01f0a-4de7-440d-89b6-6bcd63ffe234 00:14:49.392 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:49.392 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.392 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:49.392 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.392 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:49.392 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:49.392 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:49.392 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:49.392 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cf01f0a-4de7-440d-89b6-6bcd63ffe234 00:14:49.650 2024/05/13 18:28:05 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:5cf01f0a-4de7-440d-89b6-6bcd63ffe234], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:49.650 request: 00:14:49.650 { 00:14:49.650 "method": "bdev_lvol_get_lvstores", 00:14:49.650 "params": { 00:14:49.650 "uuid": "5cf01f0a-4de7-440d-89b6-6bcd63ffe234" 00:14:49.650 } 00:14:49.650 } 00:14:49.650 Got JSON-RPC error response 00:14:49.650 GoRPCClient: error on JSON-RPC call 00:14:49.650 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:14:49.650 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:49.650 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:49.650 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:49.650 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:49.909 aio_bdev 00:14:49.909 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 22ecf5f2-f3b7-4d08-a5db-5e45536be497 00:14:49.909 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=22ecf5f2-f3b7-4d08-a5db-5e45536be497 00:14:49.909 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:14:49.909 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:14:49.909 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:14:49.909 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:14:49.909 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:50.167 18:28:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 22ecf5f2-f3b7-4d08-a5db-5e45536be497 -t 2000 00:14:50.425 [ 00:14:50.425 { 00:14:50.425 "aliases": [ 00:14:50.425 "lvs/lvol" 00:14:50.425 ], 00:14:50.425 
"assigned_rate_limits": { 00:14:50.425 "r_mbytes_per_sec": 0, 00:14:50.425 "rw_ios_per_sec": 0, 00:14:50.425 "rw_mbytes_per_sec": 0, 00:14:50.426 "w_mbytes_per_sec": 0 00:14:50.426 }, 00:14:50.426 "block_size": 4096, 00:14:50.426 "claimed": false, 00:14:50.426 "driver_specific": { 00:14:50.426 "lvol": { 00:14:50.426 "base_bdev": "aio_bdev", 00:14:50.426 "clone": false, 00:14:50.426 "esnap_clone": false, 00:14:50.426 "lvol_store_uuid": "5cf01f0a-4de7-440d-89b6-6bcd63ffe234", 00:14:50.426 "num_allocated_clusters": 38, 00:14:50.426 "snapshot": false, 00:14:50.426 "thin_provision": false 00:14:50.426 } 00:14:50.426 }, 00:14:50.426 "name": "22ecf5f2-f3b7-4d08-a5db-5e45536be497", 00:14:50.426 "num_blocks": 38912, 00:14:50.426 "product_name": "Logical Volume", 00:14:50.426 "supported_io_types": { 00:14:50.426 "abort": false, 00:14:50.426 "compare": false, 00:14:50.426 "compare_and_write": false, 00:14:50.426 "flush": false, 00:14:50.426 "nvme_admin": false, 00:14:50.426 "nvme_io": false, 00:14:50.426 "read": true, 00:14:50.426 "reset": true, 00:14:50.426 "unmap": true, 00:14:50.426 "write": true, 00:14:50.426 "write_zeroes": true 00:14:50.426 }, 00:14:50.426 "uuid": "22ecf5f2-f3b7-4d08-a5db-5e45536be497", 00:14:50.426 "zoned": false 00:14:50.426 } 00:14:50.426 ] 00:14:50.426 18:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:14:50.426 18:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cf01f0a-4de7-440d-89b6-6bcd63ffe234 00:14:50.426 18:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:14:50.684 18:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:14:50.684 18:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5cf01f0a-4de7-440d-89b6-6bcd63ffe234 00:14:50.684 18:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:14:50.942 18:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:14:50.942 18:28:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 22ecf5f2-f3b7-4d08-a5db-5e45536be497 00:14:51.201 18:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5cf01f0a-4de7-440d-89b6-6bcd63ffe234 00:14:51.525 18:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:51.783 18:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:52.042 ************************************ 00:14:52.042 END TEST lvs_grow_dirty 00:14:52.042 ************************************ 00:14:52.042 00:14:52.042 real 0m21.136s 00:14:52.042 user 0m44.002s 00:14:52.042 sys 0m8.132s 00:14:52.042 18:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:52.042 18:28:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:52.301 nvmf_trace.0 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:52.301 rmmod nvme_tcp 00:14:52.301 rmmod nvme_fabrics 00:14:52.301 rmmod nvme_keyring 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 78869 ']' 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 78869 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 78869 ']' 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 78869 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78869 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:52.301 killing process with pid 78869 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78869' 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 78869 00:14:52.301 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 78869 00:14:52.868 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:52.868 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:52.868 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:52.868 18:28:08 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.868 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:52.868 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.868 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.868 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.868 18:28:08 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:52.868 00:14:52.868 real 0m42.194s 00:14:52.868 user 1m8.438s 00:14:52.868 sys 0m11.101s 00:14:52.868 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:52.868 18:28:08 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:14:52.868 ************************************ 00:14:52.868 END TEST nvmf_lvs_grow 00:14:52.868 ************************************ 00:14:52.869 18:28:08 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:52.869 18:28:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:52.869 18:28:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:52.869 18:28:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:52.869 ************************************ 00:14:52.869 START TEST nvmf_bdev_io_wait 00:14:52.869 ************************************ 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:52.869 * Looking for test storage... 00:14:52.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:52.869 
18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:52.869 Cannot find device "nvmf_tgt_br" 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # true 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:52.869 Cannot find device "nvmf_tgt_br2" 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # true 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:52.869 Cannot find device "nvmf_tgt_br" 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # true 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:52.869 Cannot find device "nvmf_tgt_br2" 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # true 00:14:52.869 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:53.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:53.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 
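The nvmf_veth_init calls above and just below build the virtual test network: three veth pairs, with the target-side ends (10.0.0.2 and 10.0.0.3) moved into the nvmf_tgt_ns_spdk namespace, the initiator end (10.0.0.1) left in the root namespace, and the host-side peers joined by the nvmf_br bridge. Condensed into one runnable sketch with the same device names and addresses as the trace (the initial cleanup of stale devices, and the "Cannot find device" noise it produces, is omitted):

    ip netns add nvmf_tgt_ns_spdk

    # Three veth pairs: initiator, target, second target interface
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target ends go into the namespace; addresses on both sides
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Let NVMe/TCP traffic reach port 4420 and forward across the bridge; the three
    # pings that follow in the log then verify 10.0.0.2, 10.0.0.3 and 10.0.0.1
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT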
00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:53.127 18:28:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:53.127 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:53.127 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:53.127 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:53.127 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:53.127 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:53.127 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:53.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:14:53.127 00:14:53.127 --- 10.0.0.2 ping statistics --- 00:14:53.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.127 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:14:53.127 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:53.127 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:53.127 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:14:53.127 00:14:53.127 --- 10.0.0.3 ping statistics --- 00:14:53.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.127 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:14:53.127 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:53.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:53.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:14:53.385 00:14:53.385 --- 10.0.0.1 ping statistics --- 00:14:53.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.385 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@433 -- # return 0 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=79282 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 79282 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 79282 ']' 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:53.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:53.385 18:28:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:53.385 [2024-05-13 18:28:09.160327] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:14:53.385 [2024-05-13 18:28:09.160439] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.385 [2024-05-13 18:28:09.303351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.644 [2024-05-13 18:28:09.448051] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.644 [2024-05-13 18:28:09.448120] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:53.644 [2024-05-13 18:28:09.448135] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.644 [2024-05-13 18:28:09.448146] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.644 [2024-05-13 18:28:09.448155] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.644 [2024-05-13 18:28:09.448321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.644 [2024-05-13 18:28:09.449055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.644 [2024-05-13 18:28:09.449147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.644 [2024-05-13 18:28:09.449157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.209 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:54.209 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:14:54.209 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:54.209 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:54.209 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.468 [2024-05-13 18:28:10.255208] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.468 Malloc0 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.468 
18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:54.468 [2024-05-13 18:28:10.306493] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:54.468 [2024-05-13 18:28:10.307062] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=79341 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=79343 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:54.468 { 00:14:54.468 "params": { 00:14:54.468 "name": "Nvme$subsystem", 00:14:54.468 "trtype": "$TEST_TRANSPORT", 00:14:54.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:54.468 "adrfam": "ipv4", 00:14:54.468 "trsvcid": "$NVMF_PORT", 00:14:54.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:54.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:54.468 "hdgst": ${hdgst:-false}, 00:14:54.468 "ddgst": ${ddgst:-false} 00:14:54.468 }, 00:14:54.468 "method": "bdev_nvme_attach_controller" 00:14:54.468 } 00:14:54.468 EOF 00:14:54.468 )") 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=79345 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:54.468 { 00:14:54.468 "params": { 00:14:54.468 "name": "Nvme$subsystem", 00:14:54.468 "trtype": "$TEST_TRANSPORT", 00:14:54.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:54.468 "adrfam": "ipv4", 00:14:54.468 "trsvcid": "$NVMF_PORT", 00:14:54.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:54.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:54.468 "hdgst": ${hdgst:-false}, 00:14:54.468 "ddgst": ${ddgst:-false} 00:14:54.468 }, 00:14:54.468 "method": "bdev_nvme_attach_controller" 00:14:54.468 } 00:14:54.468 EOF 00:14:54.468 )") 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:54.468 "params": { 00:14:54.468 "name": "Nvme1", 00:14:54.468 "trtype": "tcp", 00:14:54.468 "traddr": "10.0.0.2", 00:14:54.468 "adrfam": "ipv4", 00:14:54.468 "trsvcid": "4420", 00:14:54.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.468 "hdgst": false, 00:14:54.468 "ddgst": false 00:14:54.468 }, 00:14:54.468 "method": "bdev_nvme_attach_controller" 00:14:54.468 }' 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:54.468 { 00:14:54.468 "params": { 00:14:54.468 "name": "Nvme$subsystem", 00:14:54.468 "trtype": "$TEST_TRANSPORT", 00:14:54.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:54.468 "adrfam": "ipv4", 00:14:54.468 "trsvcid": "$NVMF_PORT", 00:14:54.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:54.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:54.468 "hdgst": ${hdgst:-false}, 00:14:54.468 "ddgst": ${ddgst:-false} 00:14:54.468 }, 00:14:54.468 "method": "bdev_nvme_attach_controller" 00:14:54.468 } 00:14:54.468 EOF 00:14:54.468 )") 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:54.468 "params": { 00:14:54.468 "name": "Nvme1", 00:14:54.468 "trtype": "tcp", 00:14:54.468 "traddr": "10.0.0.2", 00:14:54.468 "adrfam": "ipv4", 00:14:54.468 "trsvcid": "4420", 00:14:54.468 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.468 "hostnqn": "nqn.2016-06.io.spdk:host1", 
00:14:54.468 "hdgst": false, 00:14:54.468 "ddgst": false 00:14:54.468 }, 00:14:54.468 "method": "bdev_nvme_attach_controller" 00:14:54.468 }' 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:54.468 { 00:14:54.468 "params": { 00:14:54.468 "name": "Nvme$subsystem", 00:14:54.468 "trtype": "$TEST_TRANSPORT", 00:14:54.468 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:54.468 "adrfam": "ipv4", 00:14:54.468 "trsvcid": "$NVMF_PORT", 00:14:54.468 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:54.468 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:54.468 "hdgst": ${hdgst:-false}, 00:14:54.468 "ddgst": ${ddgst:-false} 00:14:54.468 }, 00:14:54.468 "method": "bdev_nvme_attach_controller" 00:14:54.468 } 00:14:54.468 EOF 00:14:54.468 )") 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=79347 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:54.468 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:54.469 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:54.469 "params": { 00:14:54.469 "name": "Nvme1", 00:14:54.469 "trtype": "tcp", 00:14:54.469 "traddr": "10.0.0.2", 00:14:54.469 "adrfam": "ipv4", 00:14:54.469 "trsvcid": "4420", 00:14:54.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.469 "hdgst": false, 00:14:54.469 "ddgst": false 00:14:54.469 }, 00:14:54.469 "method": "bdev_nvme_attach_controller" 00:14:54.469 }' 00:14:54.469 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:14:54.469 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:14:54.469 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:54.469 "params": { 00:14:54.469 "name": "Nvme1", 00:14:54.469 "trtype": "tcp", 00:14:54.469 "traddr": "10.0.0.2", 00:14:54.469 "adrfam": "ipv4", 00:14:54.469 "trsvcid": "4420", 00:14:54.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:54.469 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:54.469 "hdgst": false, 00:14:54.469 "ddgst": false 00:14:54.469 }, 00:14:54.469 "method": "bdev_nvme_attach_controller" 00:14:54.469 }' 00:14:54.469 18:28:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 79341 00:14:54.469 [2024-05-13 18:28:10.405767] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:14:54.469 [2024-05-13 18:28:10.405874] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:54.469 [2024-05-13 18:28:10.406150] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:14:54.469 [2024-05-13 18:28:10.406212] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:54.727 [2024-05-13 18:28:10.415413] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:14:54.727 [2024-05-13 18:28:10.415482] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:54.727 [2024-05-13 18:28:10.415811] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:14:54.727 [2024-05-13 18:28:10.415886] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:54.727 [2024-05-13 18:28:10.615937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.986 [2024-05-13 18:28:10.697782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.986 [2024-05-13 18:28:10.713536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:54.986 [2024-05-13 18:28:10.760915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.986 [2024-05-13 18:28:10.799196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:54.986 [2024-05-13 18:28:10.863732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:54.986 Running I/O for 1 seconds... 00:14:54.986 [2024-05-13 18:28:10.877484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.244 Running I/O for 1 seconds... 00:14:55.244 [2024-05-13 18:28:10.991918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:55.244 Running I/O for 1 seconds... 00:14:55.244 Running I/O for 1 seconds... 
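At this point the bdev_io_wait test has a single malloc-backed NVMe-oF/TCP subsystem listening inside the namespace, and four bdevperf instances (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80) attaching to it over 10.0.0.2:4420, each fed a generated bdev_nvme_attach_controller config through /dev/fd/63. A condensed sketch of the target-side RPCs and one initiator invocation, using the values from the trace (rpc_cmd is the suite's rpc.py wrapper; writing the /dev/fd/63 config as process substitution is an assumption consistent with the gen_nvmf_target_json call shown above):

    # Target: start the framework, TCP transport, 64 MB malloc bdev (512-byte blocks),
    # then the subsystem with its namespace and TCP listener
    rpc_cmd bdev_set_options -p 5 -c 1
    rpc_cmd framework_start_init
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator: the write-workload instance; the read/flush/unmap instances differ only
    # in -m/-i/-w (0x20/2/read, 0x40/3/flush, 0x80/4/unmap)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 \
        --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256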
00:14:56.181 00:14:56.181 Latency(us) 00:14:56.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.181 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:56.181 Nvme1n1 : 1.02 7398.77 28.90 0.00 0.00 17150.27 9294.20 27167.65 00:14:56.181 =================================================================================================================== 00:14:56.181 Total : 7398.77 28.90 0.00 0.00 17150.27 9294.20 27167.65 00:14:56.181 00:14:56.181 Latency(us) 00:14:56.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.181 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:56.181 Nvme1n1 : 1.00 201468.91 786.99 0.00 0.00 633.10 275.55 1050.07 00:14:56.181 =================================================================================================================== 00:14:56.181 Total : 201468.91 786.99 0.00 0.00 633.10 275.55 1050.07 00:14:56.181 00:14:56.181 Latency(us) 00:14:56.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.181 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:56.181 Nvme1n1 : 1.01 8283.31 32.36 0.00 0.00 15370.03 10307.03 27048.49 00:14:56.181 =================================================================================================================== 00:14:56.181 Total : 8283.31 32.36 0.00 0.00 15370.03 10307.03 27048.49 00:14:56.440 00:14:56.440 Latency(us) 00:14:56.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.440 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:56.440 Nvme1n1 : 1.00 7720.92 30.16 0.00 0.00 16532.12 4825.83 43134.60 00:14:56.440 =================================================================================================================== 00:14:56.440 Total : 7720.92 30.16 0.00 0.00 16532.12 4825.83 43134.60 00:14:56.440 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 79343 00:14:56.698 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 79345 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 79347 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:56.699 rmmod nvme_tcp 00:14:56.699 rmmod nvme_fabrics 00:14:56.699 rmmod nvme_keyring 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 79282 ']' 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 79282 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 79282 ']' 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 79282 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79282 00:14:56.699 killing process with pid 79282 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79282' 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 79282 00:14:56.699 [2024-05-13 18:28:12.548948] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:56.699 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 79282 00:14:56.957 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:56.957 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:56.957 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:56.957 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:56.957 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:56.957 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:56.957 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:56.957 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.957 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:56.957 00:14:56.957 real 0m4.245s 00:14:56.957 user 0m18.613s 00:14:56.957 sys 0m1.968s 00:14:56.957 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:56.957 ************************************ 00:14:56.957 18:28:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:14:56.957 END TEST nvmf_bdev_io_wait 00:14:56.957 ************************************ 00:14:56.957 18:28:12 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:56.957 18:28:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:56.957 18:28:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:56.957 18:28:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:57.216 
************************************ 00:14:57.216 START TEST nvmf_queue_depth 00:14:57.216 ************************************ 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:57.216 * Looking for test storage... 00:14:57.216 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:57.216 18:28:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:57.216 Cannot find device "nvmf_tgt_br" 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@155 -- # true 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:57.216 Cannot find device "nvmf_tgt_br2" 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@156 -- # true 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:57.216 Cannot find device "nvmf_tgt_br" 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@158 -- # true 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:57.216 Cannot find device "nvmf_tgt_br2" 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@159 -- # true 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:57.216 18:28:13 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:57.216 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:57.216 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:14:57.216 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 
00:14:57.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:57.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:14:57.476 00:14:57.476 --- 10.0.0.2 ping statistics --- 00:14:57.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.476 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:57.476 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:57.476 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:14:57.476 00:14:57.476 --- 10.0.0.3 ping statistics --- 00:14:57.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.476 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:57.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:57.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:57.476 00:14:57.476 --- 10.0.0.1 ping statistics --- 00:14:57.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:57.476 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@433 -- # return 0 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:57.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=79577 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 79577 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 79577 ']' 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
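For reference, the nvmf_veth_init sequence traced above (including the harmless "Cannot find device" and "Cannot open network namespace" messages, which are just the teardown of interfaces that do not exist yet on a clean host) boils down to the commands below. This is a condensed sketch rather than a quote of the test scripts; the namespace, interface and address names are exactly the ones this run uses.

# target side lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
# one veth pair for the initiator, two for the target (10.0.0.2 and 10.0.0.3)
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# host-side peers are all enslaved to one bridge
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above (10.0.0.2 and 10.0.0.3 from the host, 10.0.0.1 from inside the namespace) are only a reachability check before the target application is started.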
00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:57.476 18:28:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:57.476 [2024-05-13 18:28:13.413530] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:14:57.476 [2024-05-13 18:28:13.414271] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.734 [2024-05-13 18:28:13.552978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.991 [2024-05-13 18:28:13.682574] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.992 [2024-05-13 18:28:13.682870] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.992 [2024-05-13 18:28:13.683001] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.992 [2024-05-13 18:28:13.683135] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.992 [2024-05-13 18:28:13.683170] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.992 [2024-05-13 18:28:13.683273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:58.592 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:58.592 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:14:58.592 18:28:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:58.592 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:58.592 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.592 18:28:14 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:58.592 18:28:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:58.592 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.592 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.592 [2024-05-13 18:28:14.463661] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:58.592 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.592 18:28:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:58.592 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.593 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.593 Malloc0 00:14:58.593 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.593 18:28:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:58.593 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.593 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.593 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.593 18:28:14 
nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:58.593 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.593 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.593 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.593 18:28:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:58.593 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.593 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.593 [2024-05-13 18:28:14.528151] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:58.593 [2024-05-13 18:28:14.528375] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:58.851 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.851 18:28:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=79627 00:14:58.851 18:28:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:58.851 18:28:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:58.851 18:28:14 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 79627 /var/tmp/bdevperf.sock 00:14:58.851 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 79627 ']' 00:14:58.851 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:58.851 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:58.851 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:58.851 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:58.851 18:28:14 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:14:58.851 [2024-05-13 18:28:14.593052] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
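With the target running inside nvmf_tgt_ns_spdk, the queue_depth test itself is short. The trace above and just below reduces to roughly the sketch here, using this run's paths; $rpc stands for scripts/rpc.py (rpc_cmd in the trace is a wrapper around the same script), talking to the target over the default /var/tmp/spdk.sock and to bdevperf over its own socket.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side: tcp transport, a 64 MB malloc bdev with 512-byte blocks, one subsystem, one listener
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf waits for RPC (-z), then runs a 10 s, 4 KiB verify workload at queue depth 1024
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests

The pass criterion is simply that the 10-second verify run at queue depth 1024 completes without errors over the tcp transport; the roughly 8.6k IOPS reported below is incidental.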
00:14:58.851 [2024-05-13 18:28:14.593388] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79627 ] 00:14:58.851 [2024-05-13 18:28:14.735846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.109 [2024-05-13 18:28:14.874910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.042 18:28:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:00.042 18:28:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:15:00.042 18:28:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:00.042 18:28:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.042 18:28:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:00.042 NVMe0n1 00:15:00.042 18:28:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.042 18:28:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:00.042 Running I/O for 10 seconds... 00:15:10.016 00:15:10.016 Latency(us) 00:15:10.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.016 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:10.016 Verification LBA range: start 0x0 length 0x4000 00:15:10.016 NVMe0n1 : 10.13 8578.96 33.51 0.00 0.00 118742.87 28955.00 119156.36 00:15:10.016 =================================================================================================================== 00:15:10.016 Total : 8578.96 33.51 0.00 0.00 118742.87 28955.00 119156.36 00:15:10.016 0 00:15:10.274 18:28:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 79627 00:15:10.274 18:28:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 79627 ']' 00:15:10.274 18:28:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 79627 00:15:10.274 18:28:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:15:10.274 18:28:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:10.274 18:28:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79627 00:15:10.274 killing process with pid 79627 00:15:10.274 Received shutdown signal, test time was about 10.000000 seconds 00:15:10.274 00:15:10.274 Latency(us) 00:15:10.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:10.274 =================================================================================================================== 00:15:10.274 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:10.274 18:28:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:10.274 18:28:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:10.274 18:28:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79627' 00:15:10.274 18:28:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 79627 00:15:10.274 18:28:25 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 79627 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:10.533 rmmod nvme_tcp 00:15:10.533 rmmod nvme_fabrics 00:15:10.533 rmmod nvme_keyring 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 79577 ']' 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 79577 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 79577 ']' 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 79577 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79577 00:15:10.533 killing process with pid 79577 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:10.533 18:28:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:10.534 18:28:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79577' 00:15:10.534 18:28:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 79577 00:15:10.534 [2024-05-13 18:28:26.387334] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:10.534 18:28:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 79577 00:15:10.792 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:10.792 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:10.792 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:10.792 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.792 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:10.792 18:28:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.792 18:28:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.792 18:28:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.792 18:28:26 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:10.792 00:15:10.792 real 0m13.809s 00:15:10.792 user 0m23.883s 00:15:10.792 sys 0m2.083s 00:15:10.792 18:28:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:10.792 ************************************ 00:15:10.792 END TEST nvmf_queue_depth 00:15:10.792 ************************************ 00:15:10.792 18:28:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:15:11.051 18:28:26 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:11.051 18:28:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:11.051 18:28:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:11.051 18:28:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:11.051 ************************************ 00:15:11.051 START TEST nvmf_target_multipath 00:15:11.051 ************************************ 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:11.051 * Looking for test storage... 00:15:11.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:11.051 18:28:26 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:11.051 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 
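The multipath test now rebuilds the same namespace-and-bridge topology (the teardown and setup traced below repeat what was shown for the queue_depth test). Once it is up, one quick way to eyeball the two-path layout, if you are reproducing this by hand, is:

ip -br addr show nvmf_init_if                # 10.0.0.1/24 on the initiator side
ip netns exec nvmf_tgt_ns_spdk ip -br addr   # 10.0.0.2/24 and 10.0.0.3/24 on the two target interfaces
bridge link show                             # nvmf_init_br, nvmf_tgt_br and nvmf_tgt_br2 enslaved to nvmf_br

Both target addresses end up as listeners on the same subsystem below, which is what gives the host two independent paths to one NVMe namespace.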
00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:11.052 Cannot find device "nvmf_tgt_br" 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@155 -- # true 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:11.052 Cannot find device "nvmf_tgt_br2" 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@156 -- # true 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:11.052 Cannot find device "nvmf_tgt_br" 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@158 -- # true 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:11.052 Cannot find device "nvmf_tgt_br2" 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@159 -- # true 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:11.052 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:11.309 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:11.309 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.309 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:15:11.309 18:28:26 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:11.309 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@185 -- # ip link set 
nvmf_tgt_br up 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:11.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:15:11.309 00:15:11.309 --- 10.0.0.2 ping statistics --- 00:15:11.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.309 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:11.309 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:11.309 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:15:11.309 00:15:11.309 --- 10.0.0.3 ping statistics --- 00:15:11.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.309 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:11.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:11.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:15:11.309 00:15:11.309 --- 10.0.0.1 ping statistics --- 00:15:11.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.309 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@433 -- # return 0 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@481 -- # nvmfpid=79957 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@482 -- # waitforlisten 79957 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@827 -- # '[' -z 79957 ']' 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:11.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:11.309 18:28:27 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:11.566 [2024-05-13 18:28:27.306226] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
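The target that starts here is configured much like the queue_depth one, with the two differences that make multipath testing possible: the subsystem is created with -r, which enables ANA (asymmetric namespace access) reporting, and it gets a listener on each target address, after which the initiator connects over both paths and the test flips per-listener ANA states while I/O is running. Condensed from the trace that follows, with $rpc again standing for scripts/rpc.py and the hostnqn/hostid being the values nvmf/common.sh generated for this run ($NVME_HOSTNQN / $NVME_HOSTID):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r   # -r: per-listener ANA reporting
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# one connect per path; -g/-G enable TCP header and data digests
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

# the test then moves the ANA state of each listener and expects the kernel's view to follow
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state

fio (randrw, 4 KiB blocks, iodepth 128, a 6-second time-based run against /dev/nvme0n1) is started before the states are flipped, so the run exercises path failover under load; check_ana_state in the trace is just the last line above in a loop, polling a path's ana_state once a second with a 20-second timeout until the expected value shows up.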
00:15:11.566 [2024-05-13 18:28:27.306348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.566 [2024-05-13 18:28:27.448515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:11.824 [2024-05-13 18:28:27.572704] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.824 [2024-05-13 18:28:27.572767] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:11.824 [2024-05-13 18:28:27.572796] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.824 [2024-05-13 18:28:27.572809] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.824 [2024-05-13 18:28:27.572816] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.824 [2024-05-13 18:28:27.572975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:11.824 [2024-05-13 18:28:27.573676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:11.824 [2024-05-13 18:28:27.573830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:11.824 [2024-05-13 18:28:27.573835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.389 18:28:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:12.389 18:28:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@860 -- # return 0 00:15:12.389 18:28:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:12.389 18:28:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:12.389 18:28:28 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:12.647 18:28:28 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:12.647 18:28:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:12.904 [2024-05-13 18:28:28.614805] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.904 18:28:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:13.161 Malloc0 00:15:13.161 18:28:28 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:13.419 18:28:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:13.677 18:28:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.936 [2024-05-13 18:28:29.692649] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:13.936 [2024-05-13 18:28:29.692954] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 
port 4420 *** 00:15:13.936 18:28:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:14.195 [2024-05-13 18:28:29.929118] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:14.195 18:28:29 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:14.454 18:28:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:14.454 18:28:30 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:14.454 18:28:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1194 -- # local i=0 00:15:14.454 18:28:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:14.454 18:28:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:14.454 18:28:30 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # sleep 2 00:15:16.981 18:28:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:15:16.981 18:28:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:15:16.981 18:28:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:15:16.981 18:28:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:15:16.981 18:28:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:15:16.981 18:28:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # return 0 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- 
target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=80095 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:16.982 18:28:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:15:16.982 [global] 00:15:16.982 thread=1 00:15:16.982 invalidate=1 00:15:16.982 rw=randrw 00:15:16.982 time_based=1 00:15:16.982 runtime=6 00:15:16.982 ioengine=libaio 00:15:16.982 direct=1 00:15:16.982 bs=4096 00:15:16.982 iodepth=128 00:15:16.982 norandommap=0 00:15:16.982 numjobs=1 00:15:16.982 00:15:16.982 verify_dump=1 00:15:16.982 verify_backlog=512 00:15:16.982 verify_state_save=0 00:15:16.982 do_verify=1 00:15:16.982 verify=crc32c-intel 00:15:16.982 [job0] 00:15:16.982 filename=/dev/nvme0n1 00:15:16.982 Could not set queue depth (nvme0n1) 00:15:16.982 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:16.982 fio-3.35 00:15:16.982 Starting 1 thread 00:15:17.548 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:17.807 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:18.065 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:18.065 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- 
# local path=nvme0c0n1 ana_state=inaccessible 00:15:18.065 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:18.065 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:18.065 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:18.065 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:18.065 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:18.065 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:18.065 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:18.065 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:18.065 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:18.065 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:18.065 18:28:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:19.031 18:28:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:19.031 18:28:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:19.031 18:28:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:19.031 18:28:34 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:19.290 18:28:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:19.549 18:28:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:19.549 18:28:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:19.549 18:28:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:19.549 18:28:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:19.549 18:28:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:19.549 18:28:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:19.549 18:28:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:19.549 18:28:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:19.549 18:28:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:19.549 18:28:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:19.549 18:28:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:19.549 18:28:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:19.549 18:28:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:20.935 18:28:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:20.935 18:28:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:20.935 18:28:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:20.935 18:28:36 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 80095 00:15:22.895 00:15:22.895 job0: (groupid=0, jobs=1): err= 0: pid=80121: Mon May 13 18:28:38 2024 00:15:22.895 read: IOPS=10.6k, BW=41.5MiB/s (43.5MB/s)(249MiB/6003msec) 00:15:22.895 slat (usec): min=3, max=5132, avg=53.49, stdev=238.51 00:15:22.895 clat (usec): min=395, max=20022, avg=8184.69, stdev=1519.93 00:15:22.895 lat (usec): min=423, max=20038, avg=8238.18, stdev=1531.16 00:15:22.895 clat percentiles (usec): 00:15:22.895 | 1.00th=[ 5014], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 7242], 00:15:22.895 | 30.00th=[ 7439], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 8225], 00:15:22.895 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[10028], 95.00th=[11207], 00:15:22.895 | 99.00th=[13566], 99.50th=[14615], 99.90th=[15795], 99.95th=[16450], 00:15:22.895 | 99.99th=[17695] 00:15:22.895 bw ( KiB/s): min= 8008, max=28456, per=51.61%, avg=21942.73, stdev=6506.78, samples=11 00:15:22.895 iops : min= 2002, max= 7114, avg=5485.64, stdev=1626.67, samples=11 00:15:22.895 write: IOPS=6345, BW=24.8MiB/s (26.0MB/s)(131MiB/5286msec); 0 zone resets 00:15:22.895 slat (usec): min=4, max=2953, avg=67.35, stdev=176.83 00:15:22.895 clat (usec): min=342, max=16359, avg=7127.66, stdev=1333.11 00:15:22.895 lat (usec): min=420, max=16388, avg=7195.01, stdev=1342.62 00:15:22.896 clat percentiles (usec): 00:15:22.896 | 1.00th=[ 3884], 5.00th=[ 5342], 10.00th=[ 5932], 20.00th=[ 6325], 00:15:22.896 | 30.00th=[ 6587], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7177], 00:15:22.896 | 70.00th=[ 7439], 80.00th=[ 7767], 90.00th=[ 8717], 95.00th=[ 9634], 00:15:22.896 | 99.00th=[11863], 99.50th=[12649], 99.90th=[14222], 99.95th=[14746], 00:15:22.896 | 99.99th=[15533] 00:15:22.896 bw ( KiB/s): min= 8528, max=27944, per=86.75%, avg=22019.09, stdev=6230.89, samples=11 00:15:22.896 iops : min= 2132, max= 6986, avg=5504.73, stdev=1557.70, samples=11 00:15:22.896 lat (usec) : 500=0.01% 00:15:22.896 lat (msec) : 2=0.02%, 4=0.54%, 10=91.49%, 20=7.94%, 50=0.01% 00:15:22.896 cpu : usr=5.35%, sys=22.32%, ctx=6201, majf=0, minf=121 00:15:22.896 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:22.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.896 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:22.896 issued rwts: total=63809,33542,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.896 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:22.896 00:15:22.896 Run status group 0 (all jobs): 00:15:22.896 READ: bw=41.5MiB/s (43.5MB/s), 41.5MiB/s-41.5MiB/s (43.5MB/s-43.5MB/s), io=249MiB (261MB), run=6003-6003msec 00:15:22.896 WRITE: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=131MiB (137MB), run=5286-5286msec 00:15:22.896 00:15:22.896 Disk stats (read/write): 00:15:22.896 nvme0n1: ios=62700/33030, merge=0/0, 
ticks=484054/220980, in_queue=705034, util=98.68% 00:15:22.896 18:28:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:23.153 18:28:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:23.411 18:28:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:15:23.411 18:28:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:23.411 18:28:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:23.411 18:28:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:23.411 18:28:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:23.411 18:28:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:23.411 18:28:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:23.411 18:28:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:23.411 18:28:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:23.411 18:28:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:23.411 18:28:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:23.411 18:28:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:23.411 18:28:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:24.354 18:28:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:24.616 18:28:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:24.616 18:28:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:24.616 18:28:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:15:24.616 18:28:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=80248 00:15:24.616 18:28:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:24.616 18:28:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:15:24.616 [global] 00:15:24.616 thread=1 00:15:24.616 invalidate=1 00:15:24.616 rw=randrw 00:15:24.616 time_based=1 00:15:24.616 runtime=6 00:15:24.616 ioengine=libaio 00:15:24.616 direct=1 00:15:24.616 bs=4096 00:15:24.616 iodepth=128 00:15:24.616 norandommap=0 00:15:24.616 numjobs=1 00:15:24.616 00:15:24.616 verify_dump=1 00:15:24.616 verify_backlog=512 00:15:24.616 verify_state_save=0 00:15:24.616 do_verify=1 00:15:24.616 verify=crc32c-intel 00:15:24.616 [job0] 00:15:24.616 filename=/dev/nvme0n1 00:15:24.616 Could not set queue depth (nvme0n1) 00:15:24.616 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:24.616 fio-3.35 00:15:24.616 Starting 1 thread 00:15:25.551 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:25.809 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:26.067 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:26.067 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:26.067 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:26.067 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:26.067 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:26.067 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:26.067 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:26.067 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:26.067 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:26.067 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:26.067 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:26.067 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:26.067 18:28:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:27.001 18:28:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:27.001 18:28:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:27.001 18:28:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:27.001 18:28:42 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:27.260 18:28:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:27.517 18:28:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:27.517 18:28:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:27.517 18:28:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:27.517 18:28:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:27.517 18:28:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:27.517 18:28:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:27.517 18:28:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:27.517 18:28:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:27.517 18:28:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:15:27.517 18:28:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:27.517 18:28:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:27.517 18:28:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:27.517 18:28:43 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:15:28.888 18:28:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:28.888 18:28:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:28.888 18:28:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:28.888 18:28:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 80248 00:15:30.787 00:15:30.787 job0: (groupid=0, jobs=1): err= 0: pid=80269: Mon May 13 18:28:46 2024 00:15:30.787 read: IOPS=12.2k, BW=47.8MiB/s (50.1MB/s)(287MiB/6004msec) 00:15:30.787 slat (usec): min=4, max=4898, avg=42.27, stdev=207.33 00:15:30.787 clat (usec): min=414, max=15678, avg=7213.89, stdev=1564.34 00:15:30.787 lat (usec): min=440, max=15689, avg=7256.16, stdev=1583.03 00:15:30.787 clat percentiles (usec): 00:15:30.787 | 1.00th=[ 3458], 5.00th=[ 4424], 10.00th=[ 5014], 20.00th=[ 5866], 00:15:30.787 | 30.00th=[ 6783], 40.00th=[ 7177], 50.00th=[ 7373], 60.00th=[ 7570], 00:15:30.787 | 70.00th=[ 7898], 80.00th=[ 8356], 90.00th=[ 8979], 95.00th=[ 9372], 00:15:30.787 | 99.00th=[11600], 99.50th=[11994], 99.90th=[12780], 99.95th=[13566], 00:15:30.787 | 99.99th=[14353] 00:15:30.787 bw ( KiB/s): min= 3320, max=42792, per=53.12%, avg=25994.64, stdev=10752.71, samples=11 00:15:30.787 iops : min= 830, max=10698, avg=6498.64, stdev=2688.19, samples=11 00:15:30.787 write: IOPS=7514, BW=29.4MiB/s (30.8MB/s)(152MiB/5177msec); 0 zone resets 00:15:30.787 slat (usec): min=12, max=1692, avg=52.11, stdev=133.46 00:15:30.787 clat (usec): min=792, max=15152, avg=5961.36, stdev=1540.32 00:15:30.787 lat (usec): min=863, max=15177, avg=6013.47, stdev=1553.37 00:15:30.787 clat percentiles (usec): 00:15:30.787 | 1.00th=[ 2704], 5.00th=[ 3228], 10.00th=[ 3687], 20.00th=[ 4293], 00:15:30.787 | 30.00th=[ 5014], 40.00th=[ 6063], 50.00th=[ 6456], 60.00th=[ 6718], 00:15:30.787 | 70.00th=[ 6980], 80.00th=[ 7242], 90.00th=[ 7504], 95.00th=[ 7767], 00:15:30.787 | 99.00th=[ 9241], 99.50th=[10290], 99.90th=[12256], 99.95th=[12387], 00:15:30.787 | 99.99th=[13435] 00:15:30.787 bw ( KiB/s): min= 3000, max=42256, per=86.47%, avg=25992.45, stdev=10693.93, samples=11 00:15:30.787 iops : min= 750, max=10564, avg=6498.09, stdev=2673.49, samples=11 00:15:30.787 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:15:30.787 lat (msec) : 2=0.05%, 4=6.71%, 10=90.99%, 20=2.23% 00:15:30.787 cpu : usr=5.93%, sys=24.32%, ctx=7559, majf=0, minf=151 00:15:30.787 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:30.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.787 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:30.787 issued rwts: total=73453,38902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.787 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:30.787 00:15:30.787 Run status group 0 (all jobs): 00:15:30.787 READ: bw=47.8MiB/s (50.1MB/s), 47.8MiB/s-47.8MiB/s (50.1MB/s-50.1MB/s), io=287MiB (301MB), run=6004-6004msec 00:15:30.787 WRITE: bw=29.4MiB/s (30.8MB/s), 29.4MiB/s-29.4MiB/s (30.8MB/s-30.8MB/s), io=152MiB (159MB), run=5177-5177msec 00:15:30.787 00:15:30.787 Disk stats (read/write): 00:15:30.787 nvme0n1: ios=72368/38550, merge=0/0, ticks=484434/210389, in_queue=694823, util=98.65% 00:15:30.787 18:28:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:30.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:30.787 18:28:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:30.787 18:28:46 nvmf_tcp.nvmf_target_multipath -- 
common/autotest_common.sh@1215 -- # local i=0 00:15:30.787 18:28:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:15:30.787 18:28:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.787 18:28:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:15:30.787 18:28:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.787 18:28:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # return 0 00:15:30.787 18:28:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:31.044 18:28:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:31.044 18:28:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:31.044 18:28:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:31.044 18:28:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:15:31.045 18:28:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:31.045 18:28:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:15:31.045 18:28:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:31.045 18:28:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:15:31.045 18:28:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:31.045 18:28:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:31.045 rmmod nvme_tcp 00:15:31.303 rmmod nvme_fabrics 00:15:31.303 rmmod nvme_keyring 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n 79957 ']' 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@490 -- # killprocess 79957 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@946 -- # '[' -z 79957 ']' 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@950 -- # kill -0 79957 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # uname 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79957 00:15:31.303 killing process with pid 79957 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79957' 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@965 -- # kill 79957 00:15:31.303 [2024-05-13 18:28:47.060562] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: 
deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:31.303 18:28:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@970 -- # wait 79957 00:15:31.561 18:28:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:31.561 18:28:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:31.561 18:28:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:31.561 18:28:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.561 18:28:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:31.561 18:28:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.561 18:28:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.561 18:28:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.561 18:28:47 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:31.561 ************************************ 00:15:31.561 END TEST nvmf_target_multipath 00:15:31.561 ************************************ 00:15:31.561 00:15:31.561 real 0m20.620s 00:15:31.561 user 1m20.548s 00:15:31.561 sys 0m6.633s 00:15:31.561 18:28:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:31.561 18:28:47 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:15:31.561 18:28:47 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:31.561 18:28:47 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:31.561 18:28:47 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:31.561 18:28:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:31.561 ************************************ 00:15:31.561 START TEST nvmf_zcopy 00:15:31.561 ************************************ 00:15:31.561 18:28:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:31.820 * Looking for test storage... 
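The check_ana_state helper traced throughout the multipath test above (multipath.sh@18, @22, @23, @25, @26) follows one simple pattern: poll the path's sysfs ana_state file until it reports the expected state, retrying roughly 20 times at 1-second intervals. Below is a minimal sketch reconstructed from those xtrace lines; the exact loop layout and the behaviour once the retries run out are assumptions, not copied from the script itself.

    # Reconstructed from the xtrace, not taken verbatim from multipath.sh.
    check_ana_state() {
        local path=$1 ana_state=$2                    # e.g. nvme0c0n1 optimized
        local timeout=20                              # ~20 retries, 1s apart
        local ana_state_f=/sys/block/$path/ana_state

        # Wait until the sysfs file exists and reports the expected ANA state.
        while [[ ! -e $ana_state_f ]] || [[ $(< "$ana_state_f") != "$ana_state" ]]; do
            sleep 1s
            ((timeout-- == 0)) && return 1            # assumed failure path once retries run out
        done
    }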
00:15:31.820 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:31.820 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:31.821 Cannot find device "nvmf_tgt_br" 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@155 -- # true 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:31.821 Cannot find device "nvmf_tgt_br2" 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@156 -- # true 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:31.821 Cannot find device "nvmf_tgt_br" 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@158 -- # true 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:31.821 Cannot find device "nvmf_tgt_br2" 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@159 -- # true 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:31.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:31.821 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:31.821 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:32.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.092 ms 00:15:32.080 00:15:32.080 --- 10.0.0.2 ping statistics --- 00:15:32.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.080 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:32.080 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:32.080 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:15:32.080 00:15:32.080 --- 10.0.0.3 ping statistics --- 00:15:32.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.080 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:32.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:32.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:32.080 00:15:32.080 --- 10.0.0.1 ping statistics --- 00:15:32.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.080 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@433 -- # return 0 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:32.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=80559 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 80559 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 80559 ']' 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:32.080 18:28:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:32.080 [2024-05-13 18:28:47.967429] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:15:32.080 [2024-05-13 18:28:47.968038] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.339 [2024-05-13 18:28:48.103394] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.339 [2024-05-13 18:28:48.222292] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.339 [2024-05-13 18:28:48.222615] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:32.339 [2024-05-13 18:28:48.222772] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.339 [2024-05-13 18:28:48.222825] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.339 [2024-05-13 18:28:48.222922] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.339 [2024-05-13 18:28:48.222982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.274 18:28:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:33.274 18:28:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:15:33.274 18:28:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:33.274 18:28:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:33.274 18:28:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.274 [2024-05-13 18:28:49.035491] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.274 [2024-05-13 18:28:49.051373] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:33.274 [2024-05-13 18:28:49.051632] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.274 malloc0 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:33.274 { 00:15:33.274 "params": { 00:15:33.274 "name": "Nvme$subsystem", 00:15:33.274 "trtype": "$TEST_TRANSPORT", 00:15:33.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:33.274 "adrfam": "ipv4", 00:15:33.274 "trsvcid": "$NVMF_PORT", 00:15:33.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:33.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:33.274 "hdgst": ${hdgst:-false}, 00:15:33.274 "ddgst": ${ddgst:-false} 00:15:33.274 }, 00:15:33.274 "method": "bdev_nvme_attach_controller" 00:15:33.274 } 00:15:33.274 EOF 00:15:33.274 )") 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:33.274 18:28:49 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:33.274 "params": { 00:15:33.274 "name": "Nvme1", 00:15:33.274 "trtype": "tcp", 00:15:33.274 "traddr": "10.0.0.2", 00:15:33.274 "adrfam": "ipv4", 00:15:33.274 "trsvcid": "4420", 00:15:33.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:33.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:33.274 "hdgst": false, 00:15:33.274 "ddgst": false 00:15:33.274 }, 00:15:33.274 "method": "bdev_nvme_attach_controller" 00:15:33.274 }' 00:15:33.274 [2024-05-13 18:28:49.145295] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:15:33.274 [2024-05-13 18:28:49.145429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80610 ] 00:15:33.535 [2024-05-13 18:28:49.284671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.535 [2024-05-13 18:28:49.415894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.797 Running I/O for 10 seconds... 
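Collected from the trace since the zcopy test started, the target- and host-side setup reduces to a handful of RPC calls plus the bdevperf run sketched below (all flags are copied from the trace; paths are relative to the SPDK repo). The outer "subsystems"/"bdev" wrapper around the attach-controller object fed to bdevperf over /dev/fd/62 is an assumption about the shape gen_nvmf_target_json produces; only the inner params object appears verbatim in the log.

    # Target side (nvmf_tgt already running inside the nvmf_tgt_ns_spdk namespace):
    scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # Host side: attach over TCP and run the 10s verify workload (wrapper shape assumed).
    config='{"subsystems":[{"subsystem":"bdev","config":[{
      "method":"bdev_nvme_attach_controller",
      "params":{"name":"Nvme1","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
                "trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode1",
                "hostnqn":"nqn.2016-06.io.spdk:host1","hdgst":false,"ddgst":false}}]}]}'
    build/examples/bdevperf --json <(echo "$config") -t 10 -q 128 -w verify -o 8192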
00:15:43.770 00:15:43.770 Latency(us) 00:15:43.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.770 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:15:43.770 Verification LBA range: start 0x0 length 0x1000 00:15:43.770 Nvme1n1 : 10.01 6262.71 48.93 0.00 0.00 20373.41 2442.71 31695.59 00:15:43.770 =================================================================================================================== 00:15:43.770 Total : 6262.71 48.93 0.00 0.00 20373.41 2442.71 31695.59 00:15:44.029 18:28:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=80722 00:15:44.029 18:28:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:15:44.029 18:28:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:44.029 18:28:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:15:44.029 18:28:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:15:44.029 18:28:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:15:44.029 18:28:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:15:44.029 18:28:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:44.029 18:28:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:44.029 { 00:15:44.029 "params": { 00:15:44.029 "name": "Nvme$subsystem", 00:15:44.029 "trtype": "$TEST_TRANSPORT", 00:15:44.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:44.029 "adrfam": "ipv4", 00:15:44.029 "trsvcid": "$NVMF_PORT", 00:15:44.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:44.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:44.029 "hdgst": ${hdgst:-false}, 00:15:44.029 "ddgst": ${ddgst:-false} 00:15:44.029 }, 00:15:44.029 "method": "bdev_nvme_attach_controller" 00:15:44.029 } 00:15:44.029 EOF 00:15:44.029 )") 00:15:44.029 18:28:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:15:44.030 [2024-05-13 18:28:59.891801] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.030 [2024-05-13 18:28:59.891995] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.030 18:28:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
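The JSON-RPC failures that repeat from here on are not stray breakage: NSID 1 was already attached to cnode1 earlier (target/zcopy.sh@30), so every further nvmf_subsystem_add_ns call is rejected by spdk_nvmf_subsystem_add_ns_ext(), and the nvmf_rpc_ns_paused frames in each error suggest every attempt still drives a subsystem pause/resume cycle while the second bdevperf run has I/O in flight, which appears to be the point of the exercise. A hypothetical two-line reproduction of the same error, using only commands already seen in the trace:

    # First add succeeds; repeating it while NSID 1 is still attached fails with
    # JSON-RPC code -32602 "Invalid parameters" / "Requested NSID 1 already in use".
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1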
00:15:44.030 2024/05/13 18:28:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.030 18:28:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:15:44.030 18:28:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:44.030 "params": { 00:15:44.030 "name": "Nvme1", 00:15:44.030 "trtype": "tcp", 00:15:44.030 "traddr": "10.0.0.2", 00:15:44.030 "adrfam": "ipv4", 00:15:44.030 "trsvcid": "4420", 00:15:44.030 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:44.030 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:44.030 "hdgst": false, 00:15:44.030 "ddgst": false 00:15:44.030 }, 00:15:44.030 "method": "bdev_nvme_attach_controller" 00:15:44.030 }' 00:15:44.030 [2024-05-13 18:28:59.903791] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.030 [2024-05-13 18:28:59.903828] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.030 2024/05/13 18:28:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.030 [2024-05-13 18:28:59.915781] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.030 [2024-05-13 18:28:59.915815] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.030 2024/05/13 18:28:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.030 [2024-05-13 18:28:59.927793] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.030 [2024-05-13 18:28:59.927831] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.030 2024/05/13 18:28:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.030 [2024-05-13 18:28:59.939802] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.030 [2024-05-13 18:28:59.939844] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.030 2024/05/13 18:28:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.030 [2024-05-13 18:28:59.950554] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:15:44.030 [2024-05-13 18:28:59.950702] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80722 ] 00:15:44.030 [2024-05-13 18:28:59.951802] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.030 [2024-05-13 18:28:59.951831] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.030 2024/05/13 18:28:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.030 [2024-05-13 18:28:59.963788] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.030 [2024-05-13 18:28:59.963820] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.030 2024/05/13 18:28:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.289 [2024-05-13 18:28:59.975797] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.289 [2024-05-13 18:28:59.975828] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.289 2024/05/13 18:28:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.289 [2024-05-13 18:28:59.987789] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.289 [2024-05-13 18:28:59.987820] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.289 2024/05/13 18:28:59 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.289 [2024-05-13 18:28:59.999793] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.289 [2024-05-13 18:28:59.999826] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.289 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.289 [2024-05-13 18:29:00.011827] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.289 [2024-05-13 18:29:00.011866] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.289 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.289 
[2024-05-13 18:29:00.023822] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.289 [2024-05-13 18:29:00.023858] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.289 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.289 [2024-05-13 18:29:00.035852] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.289 [2024-05-13 18:29:00.035902] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.289 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.047831] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.047886] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.059823] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.059857] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.071877] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.071924] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.083889] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.083934] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 [2024-05-13 18:29:00.086781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.095881] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.095917] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.107867] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.107903] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.119883] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.119950] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.131884] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.131919] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.143904] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.143970] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.155908] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.155971] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.167892] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.167928] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.179895] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.179933] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.191927] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.191980] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.203913] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.203955] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.210793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.290 [2024-05-13 18:29:00.215890] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.215935] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.290 [2024-05-13 18:29:00.227910] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.290 [2024-05-13 18:29:00.227948] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.290 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.548 [2024-05-13 18:29:00.240015] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.548 [2024-05-13 18:29:00.240078] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.548 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.548 [2024-05-13 18:29:00.251980] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.548 [2024-05-13 
18:29:00.252031] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.548 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.548 [2024-05-13 18:29:00.263950] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.548 [2024-05-13 18:29:00.263995] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.548 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.548 [2024-05-13 18:29:00.275962] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.548 [2024-05-13 18:29:00.276011] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.548 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.548 [2024-05-13 18:29:00.287949] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.287993] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.299965] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.300005] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.311945] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.311978] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.323959] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.324002] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.335979] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.336035] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.347942] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.348007] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.359943] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.360005] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.371948] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.371979] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.383934] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.383991] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.396010] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.396058] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 Running I/O for 5 seconds... 
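[editor note] From this point bdevperf starts its 5-second I/O pass while the same nvmf_subsystem_add_ns rejection keeps repeating. The hypothetical helper below (again not part of the test) can be pointed at a saved copy of this console output to confirm that every failure is the expected "NSID already in use" / Code=-32602 case; the console.log file name is illustrative only.

#!/usr/bin/env python3
# Hypothetical log helper (not part of the test): tally the repeated
# namespace-add rejections in a saved copy of this console output.
import re
import sys
from collections import Counter

NSID_IN_USE = re.compile(r"Requested NSID (\d+) already in use")
RPC_ERR_CODE = re.compile(r"err: Code=(-?\d+) Msg=([A-Za-z ]+)")

def summarize(path):
    nsids = Counter()
    errors = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            nsids.update(NSID_IN_USE.findall(line))
            errors.update((code, msg.strip())
                          for code, msg in RPC_ERR_CODE.findall(line))
    return nsids, errors

if __name__ == "__main__":
    log_path = sys.argv[1] if len(sys.argv) > 1 else "console.log"  # illustrative name
    nsids, errors = summarize(log_path)
    print("NSID rejections:", dict(nsids))   # e.g. {'1': <count>}
    print("JSON-RPC errors:", dict(errors))  # e.g. {('-32602', 'Invalid parameters'): <count>}

[end editor note]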
00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.408038] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.408111] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.424102] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.424162] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.439496] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.439616] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.450647] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.450706] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.465128] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.465166] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.549 [2024-05-13 18:29:00.480505] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.549 [2024-05-13 18:29:00.480555] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.549 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:15:44.851 [2024-05-13 18:29:00.492521] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.492584] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.509911] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.509973] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.524491] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.524545] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.539438] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.539480] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.555735] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.555798] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.571314] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.571381] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.587133] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.587184] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.603221] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.603272] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.613306] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.613360] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.628244] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.628301] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.645110] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.645164] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.661496] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.661545] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.677859] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.677955] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.694610] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.694668] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.710262] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.710314] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.726436] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.726501] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.742968] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.743009] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.760447] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.760505] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:44.851 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:44.851 [2024-05-13 18:29:00.775242] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:44.851 [2024-05-13 18:29:00.775280] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.131 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.131 [2024-05-13 18:29:00.790269] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.131 [2024-05-13 18:29:00.790306] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.131 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.131 [2024-05-13 18:29:00.804951] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.131 [2024-05-13 18:29:00.804996] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.131 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.131 [2024-05-13 18:29:00.821588] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.131 [2024-05-13 18:29:00.821650] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.131 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.131 [2024-05-13 18:29:00.837316] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.131 [2024-05-13 18:29:00.837371] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.131 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.131 [2024-05-13 18:29:00.846834] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.131 [2024-05-13 18:29:00.846884] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.131 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.131 [2024-05-13 18:29:00.861066] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.131 [2024-05-13 18:29:00.861104] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.131 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.131 [2024-05-13 18:29:00.876275] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.131 [2024-05-13 18:29:00.876316] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.131 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.131 [2024-05-13 18:29:00.891645] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:45.131 [2024-05-13 18:29:00.891683] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.132 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.132 [2024-05-13 18:29:00.907634] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.132 [2024-05-13 18:29:00.907671] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.132 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.132 [2024-05-13 18:29:00.924155] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.132 [2024-05-13 18:29:00.924193] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.132 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.132 [2024-05-13 18:29:00.940250] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.132 [2024-05-13 18:29:00.940288] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.132 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.132 [2024-05-13 18:29:00.958335] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.132 [2024-05-13 18:29:00.958372] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.132 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.132 [2024-05-13 18:29:00.972923] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.132 [2024-05-13 18:29:00.972961] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.132 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.132 [2024-05-13 18:29:00.988412] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.132 [2024-05-13 18:29:00.988444] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.132 2024/05/13 18:29:00 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.132 [2024-05-13 18:29:01.005871] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.132 [2024-05-13 18:29:01.005906] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.132 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.132 [2024-05-13 18:29:01.022942] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.132 [2024-05-13 18:29:01.022978] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.132 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.132 [2024-05-13 18:29:01.037994] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.132 [2024-05-13 18:29:01.038027] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.132 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.132 [2024-05-13 18:29:01.053407] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.132 [2024-05-13 18:29:01.053446] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.132 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.132 [2024-05-13 18:29:01.071175] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.132 [2024-05-13 18:29:01.071219] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.087041] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.087082] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.097286] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:45.391 [2024-05-13 18:29:01.097335] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.111418] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.111455] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.127991] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.128030] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.144176] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.144214] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.161030] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.161069] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.176404] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.176440] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.193533] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.193583] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.209424] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.209494] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.226452] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.226494] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.241257] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.241293] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.256627] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.256674] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.266918] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.266950] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.281719] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.281758] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.291919] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.291954] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.306330] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.306366] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.391 [2024-05-13 18:29:01.322621] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.391 [2024-05-13 18:29:01.322656] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.391 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.650 [2024-05-13 18:29:01.339718] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.650 [2024-05-13 18:29:01.339757] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.650 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.650 [2024-05-13 18:29:01.356297] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.650 [2024-05-13 18:29:01.356338] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.650 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.650 [2024-05-13 18:29:01.371673] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.650 [2024-05-13 18:29:01.371710] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.650 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:45.650 [2024-05-13 18:29:01.386088] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:45.650 [2024-05-13 18:29:01.386125] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:45.650 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:45.650 [2024-05-13 18:29:01.401695] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:45.650 [2024-05-13 18:29:01.401728] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:45.650 2024/05/13 18:29:01 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the same three-record failure sequence (subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace / JSON-RPC nvmf_subsystem_add_ns error Code=-32602 Msg=Invalid parameters, params bdev_name:malloc0 nsid:1 nqn:nqn.2016-06.io.spdk:cnode1) repeats continuously with timestamps 18:29:01.420254 through 18:29:03.426062 (log time 00:15:45.650 to 00:15:47.726) ...]
00:15:47.726 [2024-05-13 18:29:03.440951] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:47.726 [2024-05-13 18:29:03.440986] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:47.726 [2024-05-13 18:29:03.457347] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:47.726 [2024-05-13 18:29:03.457397]
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.726 [2024-05-13 18:29:03.473873] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.726 [2024-05-13 18:29:03.473922] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.726 [2024-05-13 18:29:03.490329] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.726 [2024-05-13 18:29:03.490387] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.726 [2024-05-13 18:29:03.505893] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.726 [2024-05-13 18:29:03.505949] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.726 [2024-05-13 18:29:03.516166] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.726 [2024-05-13 18:29:03.516214] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.726 [2024-05-13 18:29:03.531655] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.726 [2024-05-13 18:29:03.531706] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.726 [2024-05-13 18:29:03.549584] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.726 [2024-05-13 18:29:03.549640] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.726 [2024-05-13 18:29:03.565395] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.726 [2024-05-13 18:29:03.565446] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.726 [2024-05-13 18:29:03.581905] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.726 [2024-05-13 18:29:03.581956] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.726 [2024-05-13 18:29:03.597296] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.726 [2024-05-13 18:29:03.597358] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.726 [2024-05-13 18:29:03.613308] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.726 [2024-05-13 18:29:03.613358] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.726 [2024-05-13 18:29:03.631193] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.726 [2024-05-13 18:29:03.631244] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.726 [2024-05-13 18:29:03.645836] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.726 [2024-05-13 18:29:03.645882] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.726 [2024-05-13 18:29:03.660689] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.726 [2024-05-13 18:29:03.660734] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:47.726 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.677067] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.677101] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.692422] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.692487] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.707616] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.707663] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.722696] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.722741] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.733096] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.733131] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.748503] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.748553] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:47.984 [2024-05-13 18:29:03.764429] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.764485] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.774859] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.774906] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.785402] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.785448] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.798910] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.798958] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.815408] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.815458] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.830956] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.831005] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.845522] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.845596] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.862098] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.862191] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.876888] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.876937] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.891514] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.891601] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.908932] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.908982] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:47.984 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:47.984 [2024-05-13 18:29:03.923582] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:47.984 [2024-05-13 18:29:03.923639] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:03.941012] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:03.941047] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:03.956399] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:03.956450] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:03.972117] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:03.972165] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:03.987349] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:03.987398] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:03 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:04.003783] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:04.003842] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:04.018642] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:04.018698] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:04.033807] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:04.033860] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:04.050038] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:04.050092] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:04.067470] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:04.067522] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:04.082469] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:04.082534] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:04.097711] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:04.097767] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:04.114407] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:04.114455] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:04.129795] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:04.129839] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:04.139847] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:04.139902] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:04.153945] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:48.249 [2024-05-13 18:29:04.153986] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.249 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.249 [2024-05-13 18:29:04.169999] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.250 [2024-05-13 18:29:04.170055] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.250 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.250 [2024-05-13 18:29:04.185336] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.250 [2024-05-13 18:29:04.185390] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.200640] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.200695] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.217466] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.217516] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.232401] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.232445] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.243104] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.243143] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.257620] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.257673] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.273213] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.273268] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.283411] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.283465] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.298564] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.298639] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.315348] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.315414] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.331385] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.331434] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.349810] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
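The records above all come from the same negative-path loop: the test keeps issuing nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1, and the target rejects each attempt with JSON-RPC code -32602 (Invalid parameters) because the NSID is already in use. As a minimal illustrative sketch (not part of this CI run), the request seen in the log could be replayed by hand roughly as below, assuming an SPDK target listening on its default RPC Unix socket (/var/tmp/spdk.sock); the parameter shape is taken directly from the params printed in the log.

```python
#!/usr/bin/env python3
"""Sketch: replay the JSON-RPC call the log shows failing.

Assumptions: SPDK target running locally, RPC socket at the default
/var/tmp/spdk.sock, subsystem nqn.2016-06.io.spdk:cnode1 and bdev malloc0
already created. Adding NSID 1 a second time is expected to fail with
code -32602, matching the "Requested NSID 1 already in use" errors above.
"""
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # assumed default; adjust for your target


def rpc_call(method, params, request_id=1):
    # SPDK's RPC server speaks plain JSON-RPC 2.0 over a Unix domain socket.
    req = {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                # Return as soon as the accumulated bytes form a full response.
                return json.loads(buf.decode())
            except json.JSONDecodeError:
                continue  # response not complete yet
    return None


params = {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "malloc0", "nsid": 1},
}
first = rpc_call("nvmf_subsystem_add_ns", params, request_id=1)
second = rpc_call("nvmf_subsystem_add_ns", params, request_id=2)
print("first attempt :", first)
print("second attempt:", second)  # expected: error object with code -32602
```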
00:15:48.528 [2024-05-13 18:29:04.349859] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.364289] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.364350] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.379405] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.379458] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.396130] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.396192] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.412947] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.413010] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.428469] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.428549] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.440536] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.440604] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.528 [2024-05-13 18:29:04.458545] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.528 [2024-05-13 18:29:04.458634] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.528 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.786 [2024-05-13 18:29:04.473473] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.786 [2024-05-13 18:29:04.473537] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.786 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.786 [2024-05-13 18:29:04.484150] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.786 [2024-05-13 18:29:04.484192] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.786 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.786 [2024-05-13 18:29:04.497742] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.786 [2024-05-13 18:29:04.497791] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.786 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.786 [2024-05-13 18:29:04.513206] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.786 [2024-05-13 18:29:04.513255] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.786 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.786 [2024-05-13 18:29:04.523442] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.786 [2024-05-13 18:29:04.523482] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.786 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.786 [2024-05-13 18:29:04.537229] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.786 [2024-05-13 18:29:04.537281] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.786 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.786 [2024-05-13 18:29:04.552599] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.786 [2024-05-13 18:29:04.552650] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.786 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.786 [2024-05-13 18:29:04.568676] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.786 [2024-05-13 18:29:04.568727] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.786 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.786 [2024-05-13 18:29:04.579010] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.786 [2024-05-13 18:29:04.579061] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.786 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.786 [2024-05-13 18:29:04.593551] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.786 [2024-05-13 18:29:04.593611] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.786 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.786 [2024-05-13 18:29:04.610453] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.786 [2024-05-13 18:29:04.610509] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.786 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.786 [2024-05-13 18:29:04.624911] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.786 [2024-05-13 18:29:04.624960] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.786 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.786 [2024-05-13 18:29:04.640106] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.787 [2024-05-13 18:29:04.640158] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.787 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.787 [2024-05-13 18:29:04.655821] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.787 [2024-05-13 18:29:04.655891] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.787 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.787 [2024-05-13 18:29:04.666207] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.787 [2024-05-13 18:29:04.666262] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.787 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.787 [2024-05-13 18:29:04.680700] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.787 [2024-05-13 18:29:04.680748] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.787 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.787 [2024-05-13 18:29:04.697012] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.787 [2024-05-13 18:29:04.697060] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.787 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.787 [2024-05-13 18:29:04.714214] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.787 [2024-05-13 18:29:04.714274] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:48.787 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:48.787 [2024-05-13 18:29:04.729127] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:48.787 [2024-05-13 18:29:04.729174] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.744395] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.744443] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.762140] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.762201] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.777706] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.777762] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.794890] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.794940] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.811858] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.811904] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.828842] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.828891] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:15:49.045 [2024-05-13 18:29:04.844963] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.845000] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.862201] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.862237] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.877186] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.877222] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.891425] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.891472] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.908032] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.908076] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.925501] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.925538] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.939865] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.939901] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.950009] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.950043] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.964381] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.964416] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.045 [2024-05-13 18:29:04.979931] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.045 [2024-05-13 18:29:04.979990] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.045 2024/05/13 18:29:04 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:04.997390] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:04.997425] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.012077] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.012116] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.028658] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.028722] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.044046] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.044097] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.058814] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.058851] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.075109] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.075147] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.091127] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.091162] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.108958] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.108993] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.123239] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.123286] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.138144] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.138191] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.154274] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.154334] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.171734] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.171781] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.186274] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.186321] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.201484] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.201519] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.216636] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.216670] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.304 [2024-05-13 18:29:05.233204] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.304 [2024-05-13 18:29:05.233250] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.304 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.563 [2024-05-13 18:29:05.249317] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:49.563 [2024-05-13 18:29:05.249352] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.563 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.563 [2024-05-13 18:29:05.266615] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.563 [2024-05-13 18:29:05.266648] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.563 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.563 [2024-05-13 18:29:05.282582] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.563 [2024-05-13 18:29:05.282615] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.563 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.563 [2024-05-13 18:29:05.299522] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.563 [2024-05-13 18:29:05.299558] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.563 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.563 [2024-05-13 18:29:05.315882] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.563 [2024-05-13 18:29:05.315928] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.563 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.563 [2024-05-13 18:29:05.333359] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.563 [2024-05-13 18:29:05.333401] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.563 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.563 [2024-05-13 18:29:05.348127] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.563 [2024-05-13 18:29:05.348165] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.563 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.563 [2024-05-13 18:29:05.363515] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.563 [2024-05-13 18:29:05.363553] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.563 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.563 [2024-05-13 18:29:05.373467] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.563 [2024-05-13 18:29:05.373502] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.564 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.564 [2024-05-13 18:29:05.388559] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.564 [2024-05-13 18:29:05.388650] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.564 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.564 [2024-05-13 18:29:05.405178] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.564 [2024-05-13 18:29:05.405229] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.564 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.564 00:15:49.564 Latency(us) 00:15:49.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.564 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:49.564 Nvme1n1 : 5.01 11857.69 92.64 0.00 0.00 10780.54 4647.10 22163.08 00:15:49.564 =================================================================================================================== 00:15:49.564 Total : 11857.69 92.64 0.00 0.00 10780.54 4647.10 22163.08 00:15:49.564 [2024-05-13 18:29:05.416871] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.564 [2024-05-13 18:29:05.416915] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.564 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.564 [2024-05-13 18:29:05.428858] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.564 [2024-05-13 
18:29:05.428890] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.564 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.564 [2024-05-13 18:29:05.440885] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.564 [2024-05-13 18:29:05.440925] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.564 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.564 [2024-05-13 18:29:05.452898] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.564 [2024-05-13 18:29:05.452937] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.564 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.564 [2024-05-13 18:29:05.464903] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.564 [2024-05-13 18:29:05.464947] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.564 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.564 [2024-05-13 18:29:05.476901] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.564 [2024-05-13 18:29:05.476953] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.564 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.564 [2024-05-13 18:29:05.488903] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.564 [2024-05-13 18:29:05.488945] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.564 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.564 [2024-05-13 18:29:05.500927] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.564 [2024-05-13 18:29:05.500974] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.564 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.822 [2024-05-13 18:29:05.512919] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.822 [2024-05-13 18:29:05.512960] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.822 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.822 [2024-05-13 18:29:05.524911] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.822 [2024-05-13 18:29:05.524955] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.822 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.822 [2024-05-13 18:29:05.536911] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.822 [2024-05-13 18:29:05.536952] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.822 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.822 [2024-05-13 18:29:05.548904] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.822 [2024-05-13 18:29:05.548939] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.822 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.822 [2024-05-13 18:29:05.560899] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.822 [2024-05-13 18:29:05.560933] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.822 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.822 [2024-05-13 18:29:05.572896] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.822 [2024-05-13 18:29:05.572926] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.822 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.822 [2024-05-13 18:29:05.584949] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.822 [2024-05-13 18:29:05.584995] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:15:49.822 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.822 [2024-05-13 18:29:05.596934] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.822 [2024-05-13 18:29:05.596976] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.822 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.822 [2024-05-13 18:29:05.608907] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.823 [2024-05-13 18:29:05.608935] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.823 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.823 [2024-05-13 18:29:05.620906] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.823 [2024-05-13 18:29:05.620934] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.823 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.823 [2024-05-13 18:29:05.632940] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.823 [2024-05-13 18:29:05.632981] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.823 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.823 [2024-05-13 18:29:05.644940] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.823 [2024-05-13 18:29:05.644978] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.823 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.823 [2024-05-13 18:29:05.656947] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.823 [2024-05-13 18:29:05.656984] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.823 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:49.823 [2024-05-13 18:29:05.668951] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.823 [2024-05-13 18:29:05.668986] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.823 2024/05/13 18:29:05 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.823 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (80722) - No such process 00:15:49.823 18:29:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 80722 00:15:49.823 18:29:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.823 18:29:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.823 18:29:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.823 18:29:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.823 18:29:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:49.823 18:29:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.823 18:29:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.823 delay0 00:15:49.823 18:29:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.823 18:29:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:49.823 18:29:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.823 18:29:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:49.823 18:29:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.823 18:29:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:50.081 [2024-05-13 18:29:05.869820] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:56.676 Initializing NVMe Controllers 00:15:56.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:56.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:56.676 Initialization complete. Launching workers. 
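A rough sketch of the RPC sequence the zcopy.sh steps above boil down to, assuming rpc_cmd is the usual test wrapper around scripts/rpc.py (command arguments are copied from the log; the delay values are presumed to be microseconds):

  # drop the malloc0 namespace, re-expose it behind a ~1s delay bdev,
  # then point the abort example at it over TCP (all taken from the run above)
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The delay bdev keeps I/O outstanding long enough for the abort tool to have commands to cancel; the submitted/success/unsuccess counters that follow are its summary.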
00:15:56.676 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 100 00:15:56.676 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 387, failed to submit 33 00:15:56.676 success 242, unsuccess 145, failed 0 00:15:56.676 18:29:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:56.676 18:29:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:15:56.676 18:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:56.676 18:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:15:56.676 18:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:56.676 18:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:15:56.676 18:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:56.676 18:29:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:56.676 rmmod nvme_tcp 00:15:56.676 rmmod nvme_fabrics 00:15:56.676 rmmod nvme_keyring 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 80559 ']' 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 80559 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 80559 ']' 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 80559 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80559 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:56.676 killing process with pid 80559 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80559' 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 80559 00:15:56.676 [2024-05-13 18:29:12.055188] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 80559 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.676 18:29:12 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:56.676 00:15:56.676 real 0m24.927s 00:15:56.676 user 0m40.260s 00:15:56.676 sys 0m6.782s 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:56.676 18:29:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:15:56.676 ************************************ 00:15:56.676 END TEST nvmf_zcopy 00:15:56.676 ************************************ 00:15:56.676 18:29:12 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:56.676 18:29:12 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:56.676 18:29:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:56.676 18:29:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:56.676 ************************************ 00:15:56.676 START TEST nvmf_nmic 00:15:56.676 ************************************ 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:56.676 * Looking for test storage... 00:15:56.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:56.676 18:29:12 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:56.677 
18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@432 -- # nvmf_veth_init 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:56.677 Cannot find device "nvmf_tgt_br" 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@155 -- # true 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.677 Cannot find device "nvmf_tgt_br2" 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@156 -- # true 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:56.677 Cannot find device "nvmf_tgt_br" 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@158 -- # true 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:56.677 Cannot find device "nvmf_tgt_br2" 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@159 -- # true 00:15:56.677 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@160 -- 
# ip link delete nvmf_br type bridge 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.937 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:56.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:56.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:15:56.937 00:15:56.937 --- 10.0.0.2 ping statistics --- 00:15:56.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.937 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:56.937 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:56.937 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:15:56.937 00:15:56.937 --- 10.0.0.3 ping statistics --- 00:15:56.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.937 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:56.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:56.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:15:56.937 00:15:56.937 --- 10.0.0.1 ping statistics --- 00:15:56.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:56.937 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@433 -- # return 0 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:56.937 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:57.198 18:29:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:57.198 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:57.198 18:29:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:57.198 18:29:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:57.198 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=81043 00:15:57.198 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:57.198 18:29:12 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 81043 00:15:57.198 18:29:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 81043 ']' 00:15:57.198 18:29:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.198 18:29:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:57.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.198 18:29:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.198 18:29:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:57.198 18:29:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:57.198 [2024-05-13 18:29:12.946138] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
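Condensing the nvmf_veth_init commands above, the test network is roughly the following (a sketch of the same ip/iptables invocations already shown in the log, not additional setup):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1/24
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, 10.0.0.2/24 in the netns
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target address, 10.0.0.3/24
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

So the nvmf_tgt just started inside nvmf_tgt_ns_spdk listens on 10.0.0.2/10.0.0.3 while the host-side initiator reaches it from 10.0.0.1, which is what the three pings above verify.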
00:15:57.198 [2024-05-13 18:29:12.946248] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.198 [2024-05-13 18:29:13.084235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.457 [2024-05-13 18:29:13.216888] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.457 [2024-05-13 18:29:13.216951] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.457 [2024-05-13 18:29:13.216965] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.457 [2024-05-13 18:29:13.216975] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.457 [2024-05-13 18:29:13.216985] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.457 [2024-05-13 18:29:13.217106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.457 [2024-05-13 18:29:13.218315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.457 [2024-05-13 18:29:13.218480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.457 [2024-05-13 18:29:13.218490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.392 18:29:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:58.392 18:29:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:15:58.392 18:29:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:58.392 18:29:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:58.392 18:29:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 [2024-05-13 18:29:14.028022] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 Malloc0 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:58.392 18:29:14 
nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 [2024-05-13 18:29:14.108493] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:58.392 [2024-05-13 18:29:14.108764] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:58.392 test case1: single bdev can't be used in multiple subsystems 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:58.392 [2024-05-13 18:29:14.136625] bdev.c:8011:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:58.392 [2024-05-13 18:29:14.136674] subsystem.c:2015:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:58.392 [2024-05-13 18:29:14.136686] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:58.392 2024/05/13 18:29:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:58.392 request: 00:15:58.392 { 00:15:58.392 "method": "nvmf_subsystem_add_ns", 00:15:58.392 "params": { 00:15:58.392 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:58.392 "namespace": { 00:15:58.392 "bdev_name": "Malloc0", 00:15:58.392 "no_auto_visible": false 00:15:58.392 } 00:15:58.392 } 00:15:58.392 } 00:15:58.392 Got JSON-RPC error 
response 00:15:58.392 GoRPCClient: error on JSON-RPC call 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:58.392 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:58.392 Adding namespace failed - expected result. 00:15:58.392 test case2: host connect to nvmf target in multiple paths 00:15:58.393 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:58.393 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:58.393 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.393 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:15:58.393 [2024-05-13 18:29:14.148779] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:58.393 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.393 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:58.393 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:58.651 18:29:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:58.651 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:15:58.651 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:15:58.651 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:15:58.651 18:29:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:16:01.180 18:29:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:01.180 18:29:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:01.180 18:29:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:01.180 18:29:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:16:01.180 18:29:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:01.180 18:29:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:16:01.180 18:29:16 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:01.180 [global] 00:16:01.180 thread=1 00:16:01.180 invalidate=1 00:16:01.180 rw=write 00:16:01.180 time_based=1 00:16:01.180 runtime=1 00:16:01.180 ioengine=libaio 00:16:01.180 direct=1 00:16:01.180 bs=4096 00:16:01.180 iodepth=1 00:16:01.180 norandommap=0 00:16:01.180 numjobs=1 00:16:01.180 00:16:01.180 verify_dump=1 00:16:01.180 verify_backlog=512 00:16:01.180 verify_state_save=0 00:16:01.180 do_verify=1 00:16:01.180 verify=crc32c-intel 00:16:01.180 [job0] 00:16:01.180 filename=/dev/nvme0n1 
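The job file dumped above by the fio-wrapper script can also be saved and run by hand against the namespace attached by the nvme connect calls earlier in this test. A minimal sketch, assuming fio is installed, that the namespace enumerates as /dev/nvme0n1 as it does in this run, and that the path /tmp/nmic-write.fio is purely illustrative:

# Save the same single-job configuration shown above to a file.
# The device node is an assumption taken from this log; adjust it to
# whatever node the connected NVMe-oF namespace actually appears as.
cat > /tmp/nmic-write.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
numjobs=1
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF

# Run it; fio emits the same per-job latency/bandwidth summary that
# follows in the log output below.
fio /tmp/nmic-write.fio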
00:16:01.180 Could not set queue depth (nvme0n1) 00:16:01.180 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:01.180 fio-3.35 00:16:01.180 Starting 1 thread 00:16:02.113 00:16:02.113 job0: (groupid=0, jobs=1): err= 0: pid=81154: Mon May 13 18:29:17 2024 00:16:02.113 read: IOPS=3356, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1001msec) 00:16:02.113 slat (nsec): min=13142, max=46342, avg=15647.53, stdev=2605.26 00:16:02.113 clat (usec): min=126, max=407, avg=143.61, stdev= 9.03 00:16:02.113 lat (usec): min=140, max=422, avg=159.26, stdev= 9.42 00:16:02.113 clat percentiles (usec): 00:16:02.113 | 1.00th=[ 131], 5.00th=[ 135], 10.00th=[ 135], 20.00th=[ 139], 00:16:02.113 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 00:16:02.113 | 70.00th=[ 147], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 159], 00:16:02.113 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 192], 99.95th=[ 198], 00:16:02.113 | 99.99th=[ 408] 00:16:02.113 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:16:02.114 slat (usec): min=19, max=168, avg=23.45, stdev= 5.42 00:16:02.114 clat (usec): min=83, max=365, avg=102.86, stdev= 8.59 00:16:02.114 lat (usec): min=110, max=386, avg=126.31, stdev=11.03 00:16:02.114 clat percentiles (usec): 00:16:02.114 | 1.00th=[ 93], 5.00th=[ 95], 10.00th=[ 96], 20.00th=[ 98], 00:16:02.114 | 30.00th=[ 99], 40.00th=[ 100], 50.00th=[ 102], 60.00th=[ 103], 00:16:02.114 | 70.00th=[ 105], 80.00th=[ 108], 90.00th=[ 112], 95.00th=[ 116], 00:16:02.114 | 99.00th=[ 125], 99.50th=[ 131], 99.90th=[ 190], 99.95th=[ 273], 00:16:02.114 | 99.99th=[ 367] 00:16:02.114 bw ( KiB/s): min=15536, max=15536, per=100.00%, avg=15536.00, stdev= 0.00, samples=1 00:16:02.114 iops : min= 3884, max= 3884, avg=3884.00, stdev= 0.00, samples=1 00:16:02.114 lat (usec) : 100=19.67%, 250=80.29%, 500=0.04% 00:16:02.114 cpu : usr=2.20%, sys=10.50%, ctx=6945, majf=0, minf=2 00:16:02.114 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:02.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.114 issued rwts: total=3360,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.114 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:02.114 00:16:02.114 Run status group 0 (all jobs): 00:16:02.114 READ: bw=13.1MiB/s (13.7MB/s), 13.1MiB/s-13.1MiB/s (13.7MB/s-13.7MB/s), io=13.1MiB (13.8MB), run=1001-1001msec 00:16:02.114 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec 00:16:02.114 00:16:02.114 Disk stats (read/write): 00:16:02.114 nvme0n1: ios=3122/3143, merge=0/0, ticks=478/355, in_queue=833, util=91.18% 00:16:02.114 18:29:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:02.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:16:02.114 18:29:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:02.114 18:29:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:16:02.114 18:29:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:02.114 18:29:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.114 18:29:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:02.114 18:29:17 nvmf_tcp.nvmf_nmic -- 
common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:02.114 18:29:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:16:02.114 18:29:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:16:02.114 18:29:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:16:02.114 18:29:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:02.114 18:29:17 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:16:02.114 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:02.114 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:16:02.114 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:02.114 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:02.114 rmmod nvme_tcp 00:16:02.114 rmmod nvme_fabrics 00:16:02.114 rmmod nvme_keyring 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 81043 ']' 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 81043 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 81043 ']' 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 81043 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81043 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:02.373 killing process with pid 81043 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81043' 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 81043 00:16:02.373 [2024-05-13 18:29:18.098938] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:02.373 18:29:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 81043 00:16:02.632 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:02.632 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:02.632 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:02.632 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.632 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:02.632 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.632 18:29:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.632 18:29:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.632 18:29:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:02.632 00:16:02.632 real 0m6.011s 00:16:02.632 user 0m20.040s 00:16:02.632 sys 
0m1.457s 00:16:02.632 18:29:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:02.632 18:29:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:16:02.632 ************************************ 00:16:02.632 END TEST nvmf_nmic 00:16:02.632 ************************************ 00:16:02.632 18:29:18 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:02.632 18:29:18 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:02.632 18:29:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:02.632 18:29:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:02.632 ************************************ 00:16:02.632 START TEST nvmf_fio_target 00:16:02.632 ************************************ 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:02.632 * Looking for test storage... 00:16:02.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.632 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:02.891 Cannot find device "nvmf_tgt_br" 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@155 -- # true 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.891 Cannot find device "nvmf_tgt_br2" 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@156 -- # true 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # 
ip link set nvmf_tgt_br down 00:16:02.891 Cannot find device "nvmf_tgt_br" 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@158 -- # true 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:02.891 Cannot find device "nvmf_tgt_br2" 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@159 -- # true 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:02.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:02.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:02.891 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br 
master nvmf_br 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:03.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:16:03.150 00:16:03.150 --- 10.0.0.2 ping statistics --- 00:16:03.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.150 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:03.150 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:03.150 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:16:03.150 00:16:03.150 --- 10.0.0.3 ping statistics --- 00:16:03.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.150 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:03.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:03.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:03.150 00:16:03.150 --- 10.0.0.1 ping statistics --- 00:16:03.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.150 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@433 -- # return 0 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.150 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=81331 00:16:03.151 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 81331 00:16:03.151 18:29:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:03.151 18:29:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 81331 ']' 00:16:03.151 18:29:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.151 18:29:18 
nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:03.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.151 18:29:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.151 18:29:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:03.151 18:29:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.151 [2024-05-13 18:29:19.031550] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:16:03.151 [2024-05-13 18:29:19.031696] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.409 [2024-05-13 18:29:19.171883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.409 [2024-05-13 18:29:19.305949] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.409 [2024-05-13 18:29:19.306044] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.409 [2024-05-13 18:29:19.306068] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.409 [2024-05-13 18:29:19.306079] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.409 [2024-05-13 18:29:19.306088] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.409 [2024-05-13 18:29:19.306268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.409 [2024-05-13 18:29:19.306889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.409 [2024-05-13 18:29:19.307083] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.409 [2024-05-13 18:29:19.307091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.346 18:29:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:04.346 18:29:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:16:04.346 18:29:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:04.346 18:29:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:04.346 18:29:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.346 18:29:20 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.346 18:29:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:04.604 [2024-05-13 18:29:20.312511] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.604 18:29:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:04.862 18:29:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:04.862 18:29:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:05.121 18:29:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:05.121 18:29:20 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:05.384 18:29:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:05.384 18:29:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:05.654 18:29:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:05.654 18:29:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:05.912 18:29:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:06.170 18:29:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:06.170 18:29:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:06.429 18:29:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:06.429 18:29:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:06.687 18:29:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:16:06.687 18:29:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:06.945 18:29:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:07.203 18:29:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:07.203 18:29:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:07.770 18:29:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:07.770 18:29:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:07.770 18:29:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.027 [2024-05-13 18:29:23.957666] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:08.027 [2024-05-13 18:29:23.957990] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.285 18:29:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:08.543 18:29:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:08.804 18:29:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:08.804 18:29:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # 
waitforserial SPDKISFASTANDAWESOME 4 00:16:08.804 18:29:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:16:08.804 18:29:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:16:08.804 18:29:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:16:08.804 18:29:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:16:08.804 18:29:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:16:10.748 18:29:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:16:10.748 18:29:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:16:10.748 18:29:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:16:10.748 18:29:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:16:10.748 18:29:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.748 18:29:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:16:10.748 18:29:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:11.006 [global] 00:16:11.006 thread=1 00:16:11.006 invalidate=1 00:16:11.006 rw=write 00:16:11.006 time_based=1 00:16:11.006 runtime=1 00:16:11.006 ioengine=libaio 00:16:11.006 direct=1 00:16:11.006 bs=4096 00:16:11.006 iodepth=1 00:16:11.006 norandommap=0 00:16:11.006 numjobs=1 00:16:11.006 00:16:11.006 verify_dump=1 00:16:11.006 verify_backlog=512 00:16:11.006 verify_state_save=0 00:16:11.006 do_verify=1 00:16:11.006 verify=crc32c-intel 00:16:11.006 [job0] 00:16:11.006 filename=/dev/nvme0n1 00:16:11.006 [job1] 00:16:11.006 filename=/dev/nvme0n2 00:16:11.006 [job2] 00:16:11.006 filename=/dev/nvme0n3 00:16:11.006 [job3] 00:16:11.006 filename=/dev/nvme0n4 00:16:11.006 Could not set queue depth (nvme0n1) 00:16:11.006 Could not set queue depth (nvme0n2) 00:16:11.006 Could not set queue depth (nvme0n3) 00:16:11.006 Could not set queue depth (nvme0n4) 00:16:11.006 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:11.006 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:11.006 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:11.006 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:11.006 fio-3.35 00:16:11.006 Starting 4 threads 00:16:12.440 00:16:12.440 job0: (groupid=0, jobs=1): err= 0: pid=81629: Mon May 13 18:29:28 2024 00:16:12.440 read: IOPS=2636, BW=10.3MiB/s (10.8MB/s)(10.3MiB/1001msec) 00:16:12.440 slat (nsec): min=13478, max=57309, avg=16239.85, stdev=3551.46 00:16:12.440 clat (usec): min=145, max=290, avg=170.23, stdev=10.16 00:16:12.440 lat (usec): min=159, max=305, avg=186.47, stdev=11.47 00:16:12.440 clat percentiles (usec): 00:16:12.440 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:16:12.440 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 172], 00:16:12.440 | 70.00th=[ 176], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 188], 00:16:12.440 | 99.00th=[ 198], 99.50th=[ 204], 99.90th=[ 223], 99.95th=[ 262], 00:16:12.440 | 99.99th=[ 289] 00:16:12.440 write: IOPS=3068, 
BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:12.440 slat (usec): min=19, max=129, avg=26.06, stdev= 7.57 00:16:12.440 clat (usec): min=105, max=2057, avg=135.70, stdev=41.41 00:16:12.440 lat (usec): min=127, max=2079, avg=161.77, stdev=44.05 00:16:12.440 clat percentiles (usec): 00:16:12.440 | 1.00th=[ 111], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 123], 00:16:12.440 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 133], 00:16:12.440 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 165], 95.00th=[ 176], 00:16:12.440 | 99.00th=[ 192], 99.50th=[ 196], 99.90th=[ 265], 99.95th=[ 906], 00:16:12.440 | 99.99th=[ 2057] 00:16:12.440 bw ( KiB/s): min=12288, max=12288, per=25.24%, avg=12288.00, stdev= 0.00, samples=1 00:16:12.440 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:12.440 lat (usec) : 250=99.89%, 500=0.07%, 1000=0.02% 00:16:12.440 lat (msec) : 4=0.02% 00:16:12.440 cpu : usr=2.40%, sys=9.30%, ctx=5711, majf=0, minf=5 00:16:12.440 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.440 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.440 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.440 issued rwts: total=2639,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.440 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.440 job1: (groupid=0, jobs=1): err= 0: pid=81630: Mon May 13 18:29:28 2024 00:16:12.440 read: IOPS=2671, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec) 00:16:12.440 slat (nsec): min=12917, max=46376, avg=17233.68, stdev=3313.48 00:16:12.440 clat (usec): min=143, max=2181, avg=170.45, stdev=44.89 00:16:12.440 lat (usec): min=158, max=2201, avg=187.68, stdev=45.15 00:16:12.440 clat percentiles (usec): 00:16:12.440 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:16:12.441 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:16:12.441 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 188], 00:16:12.441 | 99.00th=[ 206], 99.50th=[ 227], 99.90th=[ 635], 99.95th=[ 857], 00:16:12.441 | 99.99th=[ 2180] 00:16:12.441 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:12.441 slat (usec): min=19, max=110, avg=26.92, stdev= 7.56 00:16:12.441 clat (usec): min=85, max=1756, avg=131.52, stdev=32.29 00:16:12.441 lat (usec): min=127, max=1779, avg=158.44, stdev=33.35 00:16:12.441 clat percentiles (usec): 00:16:12.441 | 1.00th=[ 110], 5.00th=[ 115], 10.00th=[ 119], 20.00th=[ 123], 00:16:12.441 | 30.00th=[ 125], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 133], 00:16:12.441 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 145], 95.00th=[ 151], 00:16:12.441 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 281], 99.95th=[ 445], 00:16:12.441 | 99.99th=[ 1762] 00:16:12.441 bw ( KiB/s): min=12288, max=12288, per=25.24%, avg=12288.00, stdev= 0.00, samples=1 00:16:12.441 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:12.441 lat (usec) : 100=0.03%, 250=99.65%, 500=0.24%, 750=0.02%, 1000=0.02% 00:16:12.441 lat (msec) : 2=0.02%, 4=0.02% 00:16:12.441 cpu : usr=2.70%, sys=9.30%, ctx=5748, majf=0, minf=5 00:16:12.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.441 issued rwts: total=2674,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.441 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:16:12.441 job2: (groupid=0, jobs=1): err= 0: pid=81631: Mon May 13 18:29:28 2024 00:16:12.441 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:12.441 slat (nsec): min=12338, max=42016, avg=16000.68, stdev=3187.45 00:16:12.441 clat (usec): min=147, max=1610, avg=174.43, stdev=34.07 00:16:12.441 lat (usec): min=162, max=1625, avg=190.43, stdev=34.24 00:16:12.441 clat percentiles (usec): 00:16:12.441 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 165], 00:16:12.441 | 30.00th=[ 169], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:16:12.441 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 186], 95.00th=[ 192], 00:16:12.441 | 99.00th=[ 212], 99.50th=[ 225], 99.90th=[ 253], 99.95th=[ 963], 00:16:12.441 | 99.99th=[ 1614] 00:16:12.441 write: IOPS=3037, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1001msec); 0 zone resets 00:16:12.441 slat (usec): min=18, max=100, avg=23.22, stdev= 4.57 00:16:12.441 clat (usec): min=110, max=468, avg=142.18, stdev=17.16 00:16:12.441 lat (usec): min=131, max=489, avg=165.40, stdev=19.09 00:16:12.441 clat percentiles (usec): 00:16:12.441 | 1.00th=[ 119], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 130], 00:16:12.441 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 141], 00:16:12.441 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 165], 95.00th=[ 176], 00:16:12.441 | 99.00th=[ 190], 99.50th=[ 196], 99.90th=[ 262], 99.95th=[ 269], 00:16:12.441 | 99.99th=[ 469] 00:16:12.441 bw ( KiB/s): min=12288, max=12288, per=25.24%, avg=12288.00, stdev= 0.00, samples=1 00:16:12.441 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:12.441 lat (usec) : 250=99.88%, 500=0.09%, 1000=0.02% 00:16:12.441 lat (msec) : 2=0.02% 00:16:12.441 cpu : usr=2.20%, sys=8.20%, ctx=5603, majf=0, minf=11 00:16:12.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.441 issued rwts: total=2560,3041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.441 job3: (groupid=0, jobs=1): err= 0: pid=81632: Mon May 13 18:29:28 2024 00:16:12.441 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:12.441 slat (nsec): min=12893, max=37140, avg=16718.39, stdev=2979.11 00:16:12.441 clat (usec): min=153, max=566, avg=177.93, stdev=15.35 00:16:12.441 lat (usec): min=168, max=580, avg=194.65, stdev=15.64 00:16:12.441 clat percentiles (usec): 00:16:12.441 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 167], 00:16:12.441 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:16:12.441 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:16:12.441 | 99.00th=[ 212], 99.50th=[ 217], 99.90th=[ 379], 99.95th=[ 388], 00:16:12.441 | 99.99th=[ 570] 00:16:12.441 write: IOPS=2997, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1001msec); 0 zone resets 00:16:12.441 slat (usec): min=19, max=104, avg=24.61, stdev= 5.38 00:16:12.441 clat (usec): min=112, max=248, avg=139.23, stdev=11.11 00:16:12.441 lat (usec): min=134, max=349, avg=163.84, stdev=12.46 00:16:12.441 clat percentiles (usec): 00:16:12.441 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 127], 20.00th=[ 131], 00:16:12.441 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:16:12.441 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 159], 00:16:12.441 | 99.00th=[ 172], 99.50th=[ 176], 
99.90th=[ 206], 99.95th=[ 247], 00:16:12.441 | 99.99th=[ 249] 00:16:12.441 bw ( KiB/s): min=12288, max=12288, per=25.24%, avg=12288.00, stdev= 0.00, samples=1 00:16:12.441 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:12.441 lat (usec) : 250=99.91%, 500=0.07%, 750=0.02% 00:16:12.441 cpu : usr=1.60%, sys=9.30%, ctx=5560, majf=0, minf=14 00:16:12.441 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:12.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:12.441 issued rwts: total=2560,3000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:12.441 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:12.441 00:16:12.441 Run status group 0 (all jobs): 00:16:12.441 READ: bw=40.7MiB/s (42.7MB/s), 9.99MiB/s-10.4MiB/s (10.5MB/s-10.9MB/s), io=40.8MiB (42.7MB), run=1001-1001msec 00:16:12.441 WRITE: bw=47.5MiB/s (49.9MB/s), 11.7MiB/s-12.0MiB/s (12.3MB/s-12.6MB/s), io=47.6MiB (49.9MB), run=1001-1001msec 00:16:12.441 00:16:12.441 Disk stats (read/write): 00:16:12.441 nvme0n1: ios=2384/2560, merge=0/0, ticks=438/377, in_queue=815, util=87.88% 00:16:12.441 nvme0n2: ios=2422/2560, merge=0/0, ticks=438/362, in_queue=800, util=88.56% 00:16:12.441 nvme0n3: ios=2246/2560, merge=0/0, ticks=407/394, in_queue=801, util=89.16% 00:16:12.441 nvme0n4: ios=2227/2560, merge=0/0, ticks=406/381, in_queue=787, util=89.72% 00:16:12.441 18:29:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:12.441 [global] 00:16:12.441 thread=1 00:16:12.441 invalidate=1 00:16:12.441 rw=randwrite 00:16:12.441 time_based=1 00:16:12.441 runtime=1 00:16:12.441 ioengine=libaio 00:16:12.441 direct=1 00:16:12.441 bs=4096 00:16:12.441 iodepth=1 00:16:12.441 norandommap=0 00:16:12.441 numjobs=1 00:16:12.441 00:16:12.441 verify_dump=1 00:16:12.441 verify_backlog=512 00:16:12.441 verify_state_save=0 00:16:12.441 do_verify=1 00:16:12.441 verify=crc32c-intel 00:16:12.441 [job0] 00:16:12.441 filename=/dev/nvme0n1 00:16:12.441 [job1] 00:16:12.441 filename=/dev/nvme0n2 00:16:12.441 [job2] 00:16:12.441 filename=/dev/nvme0n3 00:16:12.441 [job3] 00:16:12.441 filename=/dev/nvme0n4 00:16:12.441 Could not set queue depth (nvme0n1) 00:16:12.441 Could not set queue depth (nvme0n2) 00:16:12.441 Could not set queue depth (nvme0n3) 00:16:12.441 Could not set queue depth (nvme0n4) 00:16:12.441 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.441 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.441 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.441 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:12.441 fio-3.35 00:16:12.441 Starting 4 threads 00:16:13.828 00:16:13.828 job0: (groupid=0, jobs=1): err= 0: pid=81686: Mon May 13 18:29:29 2024 00:16:13.828 read: IOPS=2770, BW=10.8MiB/s (11.3MB/s)(10.8MiB/1001msec) 00:16:13.828 slat (nsec): min=12593, max=33498, avg=16316.64, stdev=2866.13 00:16:13.828 clat (usec): min=146, max=495, avg=168.54, stdev=11.22 00:16:13.828 lat (usec): min=161, max=510, avg=184.85, stdev=11.68 00:16:13.828 clat percentiles (usec): 00:16:13.828 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 
00:16:13.828 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:16:13.828 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 186], 00:16:13.828 | 99.00th=[ 194], 99.50th=[ 198], 99.90th=[ 225], 99.95th=[ 289], 00:16:13.828 | 99.99th=[ 494] 00:16:13.828 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:13.828 slat (nsec): min=19161, max=90569, avg=23740.38, stdev=5163.13 00:16:13.828 clat (usec): min=99, max=729, avg=131.45, stdev=17.00 00:16:13.828 lat (usec): min=126, max=755, avg=155.19, stdev=17.96 00:16:13.828 clat percentiles (usec): 00:16:13.828 | 1.00th=[ 113], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 124], 00:16:13.828 | 30.00th=[ 126], 40.00th=[ 128], 50.00th=[ 130], 60.00th=[ 133], 00:16:13.828 | 70.00th=[ 135], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 149], 00:16:13.828 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 255], 99.95th=[ 474], 00:16:13.828 | 99.99th=[ 734] 00:16:13.828 bw ( KiB/s): min=12288, max=12288, per=26.97%, avg=12288.00, stdev= 0.00, samples=1 00:16:13.828 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:13.828 lat (usec) : 100=0.02%, 250=99.88%, 500=0.09%, 750=0.02% 00:16:13.828 cpu : usr=1.30%, sys=9.80%, ctx=5845, majf=0, minf=12 00:16:13.828 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.828 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.829 issued rwts: total=2773,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.829 job1: (groupid=0, jobs=1): err= 0: pid=81687: Mon May 13 18:29:29 2024 00:16:13.829 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:13.829 slat (nsec): min=11467, max=42333, avg=15864.30, stdev=2975.49 00:16:13.829 clat (usec): min=143, max=763, avg=193.96, stdev=59.90 00:16:13.829 lat (usec): min=157, max=778, avg=209.82, stdev=59.49 00:16:13.829 clat percentiles (usec): 00:16:13.829 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 161], 00:16:13.829 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 172], 00:16:13.829 | 70.00th=[ 178], 80.00th=[ 204], 90.00th=[ 285], 95.00th=[ 297], 00:16:13.829 | 99.00th=[ 420], 99.50th=[ 469], 99.90th=[ 529], 99.95th=[ 537], 00:16:13.829 | 99.99th=[ 766] 00:16:13.829 write: IOPS=2694, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1001msec); 0 zone resets 00:16:13.829 slat (usec): min=11, max=117, avg=23.13, stdev= 4.72 00:16:13.829 clat (usec): min=104, max=657, avg=145.03, stdev=45.36 00:16:13.829 lat (usec): min=127, max=678, avg=168.16, stdev=44.82 00:16:13.829 clat percentiles (usec): 00:16:13.829 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 125], 00:16:13.829 | 30.00th=[ 127], 40.00th=[ 129], 50.00th=[ 131], 60.00th=[ 135], 00:16:13.829 | 70.00th=[ 137], 80.00th=[ 143], 90.00th=[ 219], 95.00th=[ 253], 00:16:13.829 | 99.00th=[ 314], 99.50th=[ 355], 99.90th=[ 586], 99.95th=[ 627], 00:16:13.829 | 99.99th=[ 660] 00:16:13.829 bw ( KiB/s): min=12288, max=12288, per=26.97%, avg=12288.00, stdev= 0.00, samples=1 00:16:13.829 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:13.829 lat (usec) : 250=87.90%, 500=11.91%, 750=0.17%, 1000=0.02% 00:16:13.829 cpu : usr=1.60%, sys=8.20%, ctx=5257, majf=0, minf=7 00:16:13.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:16:13.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.829 issued rwts: total=2560,2697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.829 job2: (groupid=0, jobs=1): err= 0: pid=81688: Mon May 13 18:29:29 2024 00:16:13.829 read: IOPS=2747, BW=10.7MiB/s (11.3MB/s)(10.7MiB/1001msec) 00:16:13.829 slat (nsec): min=12568, max=36244, avg=15272.18, stdev=2496.39 00:16:13.829 clat (usec): min=145, max=593, avg=170.23, stdev=15.09 00:16:13.829 lat (usec): min=160, max=608, avg=185.50, stdev=15.32 00:16:13.829 clat percentiles (usec): 00:16:13.829 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:16:13.829 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 172], 00:16:13.829 | 70.00th=[ 174], 80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 186], 00:16:13.829 | 99.00th=[ 196], 99.50th=[ 202], 99.90th=[ 424], 99.95th=[ 529], 00:16:13.829 | 99.99th=[ 594] 00:16:13.829 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:13.829 slat (nsec): min=18639, max=89525, avg=22064.97, stdev=4738.71 00:16:13.829 clat (usec): min=108, max=1980, avg=134.44, stdev=38.29 00:16:13.829 lat (usec): min=130, max=2001, avg=156.50, stdev=38.76 00:16:13.829 clat percentiles (usec): 00:16:13.829 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 126], 00:16:13.829 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 135], 00:16:13.829 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 145], 95.00th=[ 149], 00:16:13.829 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 457], 99.95th=[ 807], 00:16:13.829 | 99.99th=[ 1975] 00:16:13.829 bw ( KiB/s): min=12288, max=12288, per=26.97%, avg=12288.00, stdev= 0.00, samples=2 00:16:13.829 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:16:13.829 lat (usec) : 250=99.81%, 500=0.10%, 750=0.05%, 1000=0.02% 00:16:13.829 lat (msec) : 2=0.02% 00:16:13.829 cpu : usr=2.40%, sys=7.80%, ctx=5824, majf=0, minf=15 00:16:13.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.829 issued rwts: total=2750,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.829 job3: (groupid=0, jobs=1): err= 0: pid=81689: Mon May 13 18:29:29 2024 00:16:13.829 read: IOPS=2521, BW=9.85MiB/s (10.3MB/s)(9.86MiB/1001msec) 00:16:13.829 slat (nsec): min=9739, max=50369, avg=15380.45, stdev=2750.02 00:16:13.829 clat (usec): min=149, max=2048, avg=197.46, stdev=64.09 00:16:13.829 lat (usec): min=164, max=2063, avg=212.84, stdev=63.45 00:16:13.829 clat percentiles (usec): 00:16:13.829 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 165], 00:16:13.829 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 176], 60.00th=[ 180], 00:16:13.829 | 70.00th=[ 186], 80.00th=[ 208], 90.00th=[ 285], 95.00th=[ 293], 00:16:13.829 | 99.00th=[ 396], 99.50th=[ 408], 99.90th=[ 537], 99.95th=[ 562], 00:16:13.829 | 99.99th=[ 2057] 00:16:13.829 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:16:13.829 slat (usec): min=11, max=105, avg=23.58, stdev= 5.70 00:16:13.829 clat (usec): min=111, max=661, avg=153.66, stdev=45.53 00:16:13.829 lat (usec): min=132, max=676, avg=177.24, stdev=44.66 00:16:13.829 clat percentiles (usec): 00:16:13.829 | 1.00th=[ 121], 
5.00th=[ 125], 10.00th=[ 127], 20.00th=[ 130], 00:16:13.829 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:16:13.829 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 231], 95.00th=[ 255], 00:16:13.829 | 99.00th=[ 318], 99.50th=[ 371], 99.90th=[ 502], 99.95th=[ 529], 00:16:13.829 | 99.99th=[ 660] 00:16:13.829 bw ( KiB/s): min=12288, max=12288, per=26.97%, avg=12288.00, stdev= 0.00, samples=1 00:16:13.829 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:13.829 lat (usec) : 250=87.73%, 500=12.14%, 750=0.12% 00:16:13.829 lat (msec) : 4=0.02% 00:16:13.829 cpu : usr=1.90%, sys=7.40%, ctx=5084, majf=0, minf=11 00:16:13.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:13.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.829 issued rwts: total=2524,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:13.829 00:16:13.829 Run status group 0 (all jobs): 00:16:13.829 READ: bw=41.4MiB/s (43.4MB/s), 9.85MiB/s-10.8MiB/s (10.3MB/s-11.3MB/s), io=41.4MiB (43.4MB), run=1001-1001msec 00:16:13.829 WRITE: bw=44.5MiB/s (46.7MB/s), 9.99MiB/s-12.0MiB/s (10.5MB/s-12.6MB/s), io=44.5MiB (46.7MB), run=1001-1001msec 00:16:13.829 00:16:13.829 Disk stats (read/write): 00:16:13.829 nvme0n1: ios=2515/2560, merge=0/0, ticks=444/365, in_queue=809, util=88.55% 00:16:13.829 nvme0n2: ios=2281/2560, merge=0/0, ticks=441/387, in_queue=828, util=89.48% 00:16:13.829 nvme0n3: ios=2440/2560, merge=0/0, ticks=428/374, in_queue=802, util=89.22% 00:16:13.829 nvme0n4: ios=2096/2560, merge=0/0, ticks=383/419, in_queue=802, util=89.87% 00:16:13.829 18:29:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:13.829 [global] 00:16:13.829 thread=1 00:16:13.829 invalidate=1 00:16:13.829 rw=write 00:16:13.829 time_based=1 00:16:13.829 runtime=1 00:16:13.829 ioengine=libaio 00:16:13.829 direct=1 00:16:13.829 bs=4096 00:16:13.829 iodepth=128 00:16:13.829 norandommap=0 00:16:13.829 numjobs=1 00:16:13.829 00:16:13.829 verify_dump=1 00:16:13.829 verify_backlog=512 00:16:13.829 verify_state_save=0 00:16:13.829 do_verify=1 00:16:13.829 verify=crc32c-intel 00:16:13.829 [job0] 00:16:13.829 filename=/dev/nvme0n1 00:16:13.829 [job1] 00:16:13.829 filename=/dev/nvme0n2 00:16:13.829 [job2] 00:16:13.829 filename=/dev/nvme0n3 00:16:13.829 [job3] 00:16:13.829 filename=/dev/nvme0n4 00:16:13.829 Could not set queue depth (nvme0n1) 00:16:13.829 Could not set queue depth (nvme0n2) 00:16:13.829 Could not set queue depth (nvme0n3) 00:16:13.829 Could not set queue depth (nvme0n4) 00:16:13.829 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.829 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.829 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.829 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:13.829 fio-3.35 00:16:13.829 Starting 4 threads 00:16:15.204 00:16:15.204 job0: (groupid=0, jobs=1): err= 0: pid=81743: Mon May 13 18:29:30 2024 00:16:15.204 read: IOPS=5588, BW=21.8MiB/s (22.9MB/s)(21.9MiB/1003msec) 00:16:15.204 slat (usec): min=5, max=3815, avg=86.84, 
stdev=448.89 00:16:15.204 clat (usec): min=349, max=15485, avg=11491.44, stdev=1183.42 00:16:15.204 lat (usec): min=3006, max=15673, avg=11578.28, stdev=1222.79 00:16:15.204 clat percentiles (usec): 00:16:15.204 | 1.00th=[ 6783], 5.00th=[ 9896], 10.00th=[10552], 20.00th=[11076], 00:16:15.204 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:16:15.204 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[13042], 00:16:15.204 | 99.00th=[14222], 99.50th=[14484], 99.90th=[15139], 99.95th=[15401], 00:16:15.204 | 99.99th=[15533] 00:16:15.204 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:16:15.204 slat (usec): min=8, max=3417, avg=83.73, stdev=401.43 00:16:15.204 clat (usec): min=7741, max=14631, avg=11064.97, stdev=1200.07 00:16:15.204 lat (usec): min=7760, max=14661, avg=11148.70, stdev=1171.40 00:16:15.204 clat percentiles (usec): 00:16:15.204 | 1.00th=[ 8225], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9634], 00:16:15.204 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:16:15.204 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12387], 00:16:15.204 | 99.00th=[12911], 99.50th=[13304], 99.90th=[13698], 99.95th=[14091], 00:16:15.204 | 99.99th=[14615] 00:16:15.204 bw ( KiB/s): min=21720, max=23336, per=35.16%, avg=22528.00, stdev=1142.68, samples=2 00:16:15.204 iops : min= 5430, max= 5834, avg=5632.00, stdev=285.67, samples=2 00:16:15.204 lat (usec) : 500=0.01% 00:16:15.204 lat (msec) : 4=0.37%, 10=13.95%, 20=85.66% 00:16:15.204 cpu : usr=4.79%, sys=15.07%, ctx=398, majf=0, minf=1 00:16:15.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:15.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.204 issued rwts: total=5605,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.204 job1: (groupid=0, jobs=1): err= 0: pid=81744: Mon May 13 18:29:30 2024 00:16:15.204 read: IOPS=2384, BW=9537KiB/s (9766kB/s)(9604KiB/1007msec) 00:16:15.204 slat (usec): min=6, max=8854, avg=210.64, stdev=954.98 00:16:15.204 clat (usec): min=1461, max=37795, avg=26803.14, stdev=5383.63 00:16:15.204 lat (usec): min=7321, max=40974, avg=27013.77, stdev=5330.11 00:16:15.204 clat percentiles (usec): 00:16:15.204 | 1.00th=[ 8225], 5.00th=[17957], 10.00th=[21627], 20.00th=[23200], 00:16:15.204 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26084], 60.00th=[26608], 00:16:15.204 | 70.00th=[28443], 80.00th=[31589], 90.00th=[34341], 95.00th=[36963], 00:16:15.204 | 99.00th=[37487], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:16:15.204 | 99.99th=[38011] 00:16:15.204 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:16:15.204 slat (usec): min=12, max=5654, avg=184.93, stdev=735.27 00:16:15.204 clat (usec): min=10446, max=38679, avg=24287.47, stdev=5555.96 00:16:15.204 lat (usec): min=10496, max=38710, avg=24472.40, stdev=5556.84 00:16:15.204 clat percentiles (usec): 00:16:15.204 | 1.00th=[13960], 5.00th=[17433], 10.00th=[17695], 20.00th=[18744], 00:16:15.204 | 30.00th=[21365], 40.00th=[22938], 50.00th=[24249], 60.00th=[24773], 00:16:15.204 | 70.00th=[25297], 80.00th=[27395], 90.00th=[33424], 95.00th=[36439], 00:16:15.204 | 99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[38536], 00:16:15.204 | 99.99th=[38536] 00:16:15.204 bw ( KiB/s): min=10120, max=10360, per=15.98%, 
avg=10240.00, stdev=169.71, samples=2 00:16:15.204 iops : min= 2530, max= 2590, avg=2560.00, stdev=42.43, samples=2 00:16:15.204 lat (msec) : 2=0.02%, 10=0.65%, 20=14.65%, 50=84.68% 00:16:15.204 cpu : usr=3.28%, sys=8.15%, ctx=306, majf=0, minf=7 00:16:15.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:16:15.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.204 issued rwts: total=2401,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.204 job2: (groupid=0, jobs=1): err= 0: pid=81745: Mon May 13 18:29:30 2024 00:16:15.204 read: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec) 00:16:15.204 slat (usec): min=7, max=8207, avg=208.17, stdev=873.49 00:16:15.204 clat (usec): min=16962, max=39217, avg=26279.34, stdev=3766.42 00:16:15.204 lat (usec): min=18261, max=41349, avg=26487.51, stdev=3715.47 00:16:15.204 clat percentiles (usec): 00:16:15.204 | 1.00th=[19792], 5.00th=[21103], 10.00th=[22414], 20.00th=[23987], 00:16:15.204 | 30.00th=[25035], 40.00th=[25560], 50.00th=[25822], 60.00th=[26084], 00:16:15.204 | 70.00th=[26608], 80.00th=[27395], 90.00th=[30802], 95.00th=[35914], 00:16:15.204 | 99.00th=[39060], 99.50th=[39060], 99.90th=[39060], 99.95th=[39060], 00:16:15.204 | 99.99th=[39060] 00:16:15.204 write: IOPS=2809, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1003msec); 0 zone resets 00:16:15.204 slat (usec): min=12, max=13054, avg=157.37, stdev=760.09 00:16:15.204 clat (usec): min=507, max=36307, avg=20969.90, stdev=4872.18 00:16:15.204 lat (usec): min=3993, max=38916, avg=21127.27, stdev=4834.74 00:16:15.204 clat percentiles (usec): 00:16:15.204 | 1.00th=[ 5145], 5.00th=[15533], 10.00th=[17957], 20.00th=[18220], 00:16:15.204 | 30.00th=[18482], 40.00th=[18744], 50.00th=[19006], 60.00th=[20055], 00:16:15.204 | 70.00th=[22938], 80.00th=[24249], 90.00th=[27657], 95.00th=[31327], 00:16:15.204 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:16:15.204 | 99.99th=[36439] 00:16:15.204 bw ( KiB/s): min= 9232, max=12288, per=16.79%, avg=10760.00, stdev=2160.92, samples=2 00:16:15.204 iops : min= 2308, max= 3072, avg=2690.00, stdev=540.23, samples=2 00:16:15.204 lat (usec) : 750=0.02% 00:16:15.204 lat (msec) : 4=0.02%, 10=0.58%, 20=31.46%, 50=67.92% 00:16:15.204 cpu : usr=3.19%, sys=8.38%, ctx=231, majf=0, minf=6 00:16:15.204 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:16:15.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.204 issued rwts: total=2560,2818,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.204 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.204 job3: (groupid=0, jobs=1): err= 0: pid=81746: Mon May 13 18:29:30 2024 00:16:15.204 read: IOPS=4793, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1002msec) 00:16:15.204 slat (usec): min=5, max=5819, avg=99.09, stdev=483.36 00:16:15.204 clat (usec): min=1128, max=19646, avg=12948.96, stdev=1575.01 00:16:15.204 lat (usec): min=1143, max=19690, avg=13048.05, stdev=1612.56 00:16:15.204 clat percentiles (usec): 00:16:15.204 | 1.00th=[ 8356], 5.00th=[ 9896], 10.00th=[11338], 20.00th=[12256], 00:16:15.204 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:16:15.204 | 70.00th=[13173], 80.00th=[13829], 90.00th=[15008], 95.00th=[15533], 00:16:15.204 
| 99.00th=[16909], 99.50th=[17433], 99.90th=[17695], 99.95th=[18482], 00:16:15.204 | 99.99th=[19530] 00:16:15.204 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:16:15.204 slat (usec): min=11, max=6399, avg=94.62, stdev=478.50 00:16:15.204 clat (usec): min=7310, max=20378, avg=12586.63, stdev=1311.73 00:16:15.204 lat (usec): min=7335, max=20409, avg=12681.25, stdev=1371.92 00:16:15.204 clat percentiles (usec): 00:16:15.204 | 1.00th=[ 8455], 5.00th=[10552], 10.00th=[11469], 20.00th=[11863], 00:16:15.205 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12518], 60.00th=[12649], 00:16:15.205 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13829], 95.00th=[14353], 00:16:15.205 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:16:15.205 | 99.99th=[20317] 00:16:15.205 bw ( KiB/s): min=20480, max=20480, per=31.96%, avg=20480.00, stdev= 0.00, samples=1 00:16:15.205 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:16:15.205 lat (msec) : 2=0.07%, 10=4.06%, 20=95.86%, 50=0.01% 00:16:15.205 cpu : usr=4.20%, sys=14.19%, ctx=531, majf=0, minf=1 00:16:15.205 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:15.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:15.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:15.205 issued rwts: total=4803,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:15.205 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:15.205 00:16:15.205 Run status group 0 (all jobs): 00:16:15.205 READ: bw=59.6MiB/s (62.5MB/s), 9537KiB/s-21.8MiB/s (9766kB/s-22.9MB/s), io=60.0MiB (63.0MB), run=1002-1007msec 00:16:15.205 WRITE: bw=62.6MiB/s (65.6MB/s), 9.93MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=63.0MiB (66.1MB), run=1002-1007msec 00:16:15.205 00:16:15.205 Disk stats (read/write): 00:16:15.205 nvme0n1: ios=4658/5031, merge=0/0, ticks=16088/15417, in_queue=31505, util=87.49% 00:16:15.205 nvme0n2: ios=2082/2215, merge=0/0, ticks=13501/12224, in_queue=25725, util=88.24% 00:16:15.205 nvme0n3: ios=2048/2560, merge=0/0, ticks=13586/11794, in_queue=25380, util=89.15% 00:16:15.205 nvme0n4: ios=4096/4467, merge=0/0, ticks=25200/23796, in_queue=48996, util=89.40% 00:16:15.205 18:29:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:15.205 [global] 00:16:15.205 thread=1 00:16:15.205 invalidate=1 00:16:15.205 rw=randwrite 00:16:15.205 time_based=1 00:16:15.205 runtime=1 00:16:15.205 ioengine=libaio 00:16:15.205 direct=1 00:16:15.205 bs=4096 00:16:15.205 iodepth=128 00:16:15.205 norandommap=0 00:16:15.205 numjobs=1 00:16:15.205 00:16:15.205 verify_dump=1 00:16:15.205 verify_backlog=512 00:16:15.205 verify_state_save=0 00:16:15.205 do_verify=1 00:16:15.205 verify=crc32c-intel 00:16:15.205 [job0] 00:16:15.205 filename=/dev/nvme0n1 00:16:15.205 [job1] 00:16:15.205 filename=/dev/nvme0n2 00:16:15.205 [job2] 00:16:15.205 filename=/dev/nvme0n3 00:16:15.205 [job3] 00:16:15.205 filename=/dev/nvme0n4 00:16:15.205 Could not set queue depth (nvme0n1) 00:16:15.205 Could not set queue depth (nvme0n2) 00:16:15.205 Could not set queue depth (nvme0n3) 00:16:15.205 Could not set queue depth (nvme0n4) 00:16:15.205 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:15.205 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:15.205 
job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:15.205 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:15.205 fio-3.35 00:16:15.205 Starting 4 threads 00:16:16.582 00:16:16.582 job0: (groupid=0, jobs=1): err= 0: pid=81805: Mon May 13 18:29:32 2024 00:16:16.582 read: IOPS=1873, BW=7495KiB/s (7674kB/s)(7532KiB/1005msec) 00:16:16.582 slat (usec): min=4, max=10937, avg=222.02, stdev=1034.28 00:16:16.582 clat (usec): min=2471, max=57775, avg=26109.76, stdev=6657.50 00:16:16.582 lat (usec): min=5293, max=57795, avg=26331.77, stdev=6784.48 00:16:16.582 clat percentiles (usec): 00:16:16.582 | 1.00th=[ 5735], 5.00th=[16057], 10.00th=[21103], 20.00th=[21890], 00:16:16.582 | 30.00th=[22676], 40.00th=[23200], 50.00th=[25560], 60.00th=[28443], 00:16:16.582 | 70.00th=[30016], 80.00th=[31065], 90.00th=[32113], 95.00th=[34866], 00:16:16.582 | 99.00th=[50594], 99.50th=[51119], 99.90th=[57410], 99.95th=[57934], 00:16:16.582 | 99.99th=[57934] 00:16:16.582 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:16:16.582 slat (usec): min=5, max=13100, avg=276.92, stdev=1161.29 00:16:16.582 clat (usec): min=16791, max=78750, avg=37961.31, stdev=15822.11 00:16:16.582 lat (usec): min=16809, max=79983, avg=38238.23, stdev=15942.74 00:16:16.582 clat percentiles (usec): 00:16:16.582 | 1.00th=[18744], 5.00th=[19530], 10.00th=[20055], 20.00th=[21890], 00:16:16.582 | 30.00th=[28443], 40.00th=[30016], 50.00th=[33162], 60.00th=[41681], 00:16:16.582 | 70.00th=[43779], 80.00th=[50070], 90.00th=[63701], 95.00th=[72877], 00:16:16.582 | 99.00th=[74974], 99.50th=[74974], 99.90th=[77071], 99.95th=[78119], 00:16:16.582 | 99.99th=[79168] 00:16:16.582 bw ( KiB/s): min= 7304, max= 9080, per=15.56%, avg=8192.00, stdev=1255.82, samples=2 00:16:16.582 iops : min= 1826, max= 2270, avg=2048.00, stdev=313.96, samples=2 00:16:16.582 lat (msec) : 4=0.03%, 10=0.81%, 20=7.89%, 50=80.46%, 100=10.81% 00:16:16.582 cpu : usr=2.29%, sys=5.28%, ctx=595, majf=0, minf=9 00:16:16.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:16.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.582 issued rwts: total=1883,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.582 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.582 job1: (groupid=0, jobs=1): err= 0: pid=81806: Mon May 13 18:29:32 2024 00:16:16.582 read: IOPS=6608, BW=25.8MiB/s (27.1MB/s)(25.9MiB/1005msec) 00:16:16.582 slat (usec): min=3, max=8940, avg=77.85, stdev=485.90 00:16:16.582 clat (usec): min=2558, max=18905, avg=10232.52, stdev=2359.83 00:16:16.582 lat (usec): min=3942, max=18921, avg=10310.37, stdev=2385.56 00:16:16.582 clat percentiles (usec): 00:16:16.582 | 1.00th=[ 5407], 5.00th=[ 7373], 10.00th=[ 7898], 20.00th=[ 8717], 00:16:16.582 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9765], 00:16:16.582 | 70.00th=[10814], 80.00th=[11600], 90.00th=[13698], 95.00th=[15533], 00:16:16.582 | 99.00th=[17695], 99.50th=[17957], 99.90th=[18482], 99.95th=[18744], 00:16:16.582 | 99.99th=[19006] 00:16:16.582 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:16:16.582 slat (usec): min=4, max=7350, avg=65.83, stdev=354.28 00:16:16.582 clat (usec): min=3249, max=18829, avg=8919.29, stdev=1740.42 00:16:16.582 lat (usec): min=3283, 
max=18838, avg=8985.11, stdev=1774.45 00:16:16.582 clat percentiles (usec): 00:16:16.582 | 1.00th=[ 4015], 5.00th=[ 4948], 10.00th=[ 5932], 20.00th=[ 7898], 00:16:16.582 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9503], 60.00th=[ 9765], 00:16:16.582 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10290], 95.00th=[10552], 00:16:16.582 | 99.00th=[10945], 99.50th=[13698], 99.90th=[17957], 99.95th=[18220], 00:16:16.582 | 99.99th=[18744] 00:16:16.582 bw ( KiB/s): min=25149, max=28104, per=50.58%, avg=26626.50, stdev=2089.50, samples=2 00:16:16.582 iops : min= 6287, max= 7026, avg=6656.50, stdev=522.55, samples=2 00:16:16.582 lat (msec) : 4=0.53%, 10=67.49%, 20=31.98% 00:16:16.582 cpu : usr=5.08%, sys=15.34%, ctx=814, majf=0, minf=11 00:16:16.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:16:16.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.582 issued rwts: total=6642,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.582 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.582 job2: (groupid=0, jobs=1): err= 0: pid=81807: Mon May 13 18:29:32 2024 00:16:16.582 read: IOPS=1867, BW=7468KiB/s (7647kB/s)(7528KiB/1008msec) 00:16:16.582 slat (usec): min=4, max=11433, avg=227.50, stdev=1078.16 00:16:16.582 clat (usec): min=4808, max=58686, avg=26887.70, stdev=7148.01 00:16:16.582 lat (usec): min=9157, max=58722, avg=27115.20, stdev=7249.77 00:16:16.582 clat percentiles (usec): 00:16:16.582 | 1.00th=[ 9372], 5.00th=[17695], 10.00th=[20579], 20.00th=[21627], 00:16:16.582 | 30.00th=[22414], 40.00th=[22938], 50.00th=[26608], 60.00th=[28967], 00:16:16.582 | 70.00th=[30278], 80.00th=[31327], 90.00th=[32637], 95.00th=[36963], 00:16:16.582 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55313], 99.95th=[58459], 00:16:16.582 | 99.99th=[58459] 00:16:16.582 write: IOPS=2031, BW=8127KiB/s (8322kB/s)(8192KiB/1008msec); 0 zone resets 00:16:16.582 slat (usec): min=5, max=13258, avg=272.23, stdev=1099.14 00:16:16.582 clat (usec): min=18476, max=77984, avg=37294.11, stdev=15838.26 00:16:16.582 lat (usec): min=18503, max=78597, avg=37566.34, stdev=15964.24 00:16:16.582 clat percentiles (usec): 00:16:16.582 | 1.00th=[19268], 5.00th=[19792], 10.00th=[20055], 20.00th=[22414], 00:16:16.582 | 30.00th=[26346], 40.00th=[29492], 50.00th=[30540], 60.00th=[40633], 00:16:16.582 | 70.00th=[42730], 80.00th=[50070], 90.00th=[63177], 95.00th=[72877], 00:16:16.582 | 99.00th=[74974], 99.50th=[74974], 99.90th=[77071], 99.95th=[77071], 00:16:16.582 | 99.99th=[78119] 00:16:16.582 bw ( KiB/s): min= 7153, max= 9234, per=15.57%, avg=8193.50, stdev=1471.49, samples=2 00:16:16.582 iops : min= 1788, max= 2308, avg=2048.00, stdev=367.70, samples=2 00:16:16.582 lat (msec) : 10=0.64%, 20=7.43%, 50=80.10%, 100=11.83% 00:16:16.582 cpu : usr=2.48%, sys=5.36%, ctx=515, majf=0, minf=15 00:16:16.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:16:16.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.582 issued rwts: total=1882,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.582 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.582 job3: (groupid=0, jobs=1): err= 0: pid=81808: Mon May 13 18:29:32 2024 00:16:16.582 read: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec) 00:16:16.582 slat (usec): min=6, max=12349, 
avg=169.62, stdev=945.24 00:16:16.582 clat (usec): min=9669, max=85085, avg=21887.13, stdev=10390.24 00:16:16.582 lat (usec): min=9697, max=87917, avg=22056.75, stdev=10470.29 00:16:16.582 clat percentiles (usec): 00:16:16.582 | 1.00th=[10683], 5.00th=[12125], 10.00th=[12649], 20.00th=[13173], 00:16:16.582 | 30.00th=[13566], 40.00th=[14877], 50.00th=[17433], 60.00th=[23462], 00:16:16.582 | 70.00th=[29230], 80.00th=[30540], 90.00th=[32113], 95.00th=[40109], 00:16:16.582 | 99.00th=[50070], 99.50th=[57934], 99.90th=[85459], 99.95th=[85459], 00:16:16.582 | 99.99th=[85459] 00:16:16.582 write: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(9.97MiB/1011msec); 0 zone resets 00:16:16.582 slat (usec): min=12, max=22625, avg=246.74, stdev=1160.77 00:16:16.582 clat (msec): min=10, max=123, avg=32.55, stdev=21.37 00:16:16.582 lat (msec): min=10, max=129, avg=32.80, stdev=21.50 00:16:16.582 clat percentiles (msec): 00:16:16.582 | 1.00th=[ 14], 5.00th=[ 19], 10.00th=[ 21], 20.00th=[ 22], 00:16:16.582 | 30.00th=[ 22], 40.00th=[ 22], 50.00th=[ 23], 60.00th=[ 26], 00:16:16.582 | 70.00th=[ 31], 80.00th=[ 41], 90.00th=[ 60], 95.00th=[ 85], 00:16:16.582 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 124], 99.95th=[ 124], 00:16:16.582 | 99.99th=[ 124] 00:16:16.582 bw ( KiB/s): min= 8192, max=11185, per=18.41%, avg=9688.50, stdev=2116.37, samples=2 00:16:16.582 iops : min= 2048, max= 2796, avg=2422.00, stdev=528.92, samples=2 00:16:16.582 lat (msec) : 10=0.17%, 20=29.00%, 50=63.37%, 100=5.59%, 250=1.87% 00:16:16.582 cpu : usr=2.67%, sys=8.02%, ctx=353, majf=0, minf=8 00:16:16.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:16:16.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:16.582 issued rwts: total=2048,2552,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.582 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:16.582 00:16:16.582 Run status group 0 (all jobs): 00:16:16.582 READ: bw=48.1MiB/s (50.5MB/s), 7468KiB/s-25.8MiB/s (7647kB/s-27.1MB/s), io=48.7MiB (51.0MB), run=1005-1011msec 00:16:16.582 WRITE: bw=51.4MiB/s (53.9MB/s), 8127KiB/s-25.9MiB/s (8322kB/s-27.1MB/s), io=52.0MiB (54.5MB), run=1005-1011msec 00:16:16.582 00:16:16.582 Disk stats (read/write): 00:16:16.582 nvme0n1: ios=1586/1822, merge=0/0, ticks=19848/30769, in_queue=50617, util=87.27% 00:16:16.582 nvme0n2: ios=5681/5823, merge=0/0, ticks=52978/48885, in_queue=101863, util=89.09% 00:16:16.582 nvme0n3: ios=1536/1841, merge=0/0, ticks=20851/29581, in_queue=50432, util=88.44% 00:16:16.582 nvme0n4: ios=1784/2048, merge=0/0, ticks=19168/32738, in_queue=51906, util=89.82% 00:16:16.582 18:29:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:16:16.582 18:29:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=81823 00:16:16.582 18:29:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:16.582 18:29:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:16:16.582 [global] 00:16:16.582 thread=1 00:16:16.582 invalidate=1 00:16:16.582 rw=read 00:16:16.582 time_based=1 00:16:16.582 runtime=10 00:16:16.582 ioengine=libaio 00:16:16.582 direct=1 00:16:16.582 bs=4096 00:16:16.582 iodepth=1 00:16:16.582 norandommap=1 00:16:16.582 numjobs=1 00:16:16.582 00:16:16.582 [job0] 00:16:16.582 filename=/dev/nvme0n1 00:16:16.582 [job1] 00:16:16.582 filename=/dev/nvme0n2 00:16:16.582 [job2] 00:16:16.582 
filename=/dev/nvme0n3 00:16:16.582 [job3] 00:16:16.582 filename=/dev/nvme0n4 00:16:16.582 Could not set queue depth (nvme0n1) 00:16:16.582 Could not set queue depth (nvme0n2) 00:16:16.582 Could not set queue depth (nvme0n3) 00:16:16.582 Could not set queue depth (nvme0n4) 00:16:16.582 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.582 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.582 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.582 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.582 fio-3.35 00:16:16.582 Starting 4 threads 00:16:19.864 18:29:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:19.864 fio: pid=81866, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:19.864 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=63209472, buflen=4096 00:16:19.864 18:29:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:19.864 fio: pid=81865, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:19.864 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=44277760, buflen=4096 00:16:19.864 18:29:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:19.864 18:29:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:20.122 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=49774592, buflen=4096 00:16:20.122 fio: pid=81863, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:20.122 18:29:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.122 18:29:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:20.381 fio: pid=81864, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:20.381 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=17678336, buflen=4096 00:16:20.381 00:16:20.381 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81863: Mon May 13 18:29:36 2024 00:16:20.381 read: IOPS=3514, BW=13.7MiB/s (14.4MB/s)(47.5MiB/3458msec) 00:16:20.381 slat (usec): min=10, max=14691, avg=18.21, stdev=199.27 00:16:20.381 clat (usec): min=42, max=4080, avg=264.77, stdev=85.10 00:16:20.381 lat (usec): min=148, max=14963, avg=282.98, stdev=216.55 00:16:20.381 clat percentiles (usec): 00:16:20.381 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 176], 00:16:20.381 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 293], 00:16:20.381 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:16:20.381 | 99.00th=[ 375], 99.50th=[ 404], 99.90th=[ 857], 99.95th=[ 1795], 00:16:20.381 | 99.99th=[ 3490] 00:16:20.381 bw ( KiB/s): min=12336, max=16944, per=21.31%, avg=13524.00, stdev=1694.66, samples=6 00:16:20.381 iops : min= 3084, max= 4236, avg=3381.00, stdev=423.67, samples=6 00:16:20.381 lat (usec) : 50=0.01%, 100=0.01%, 250=23.28%, 500=76.55%, 750=0.04% 00:16:20.381 lat (usec) : 
1000=0.02% 00:16:20.381 lat (msec) : 2=0.04%, 4=0.03%, 10=0.01% 00:16:20.381 cpu : usr=1.07%, sys=4.45%, ctx=12169, majf=0, minf=1 00:16:20.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.381 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.381 issued rwts: total=12153,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.381 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81864: Mon May 13 18:29:36 2024 00:16:20.381 read: IOPS=5558, BW=21.7MiB/s (22.8MB/s)(80.9MiB/3724msec) 00:16:20.381 slat (usec): min=11, max=15827, avg=17.91, stdev=170.28 00:16:20.381 clat (usec): min=5, max=2133, avg=160.63, stdev=46.29 00:16:20.381 lat (usec): min=137, max=16104, avg=178.55, stdev=177.20 00:16:20.381 clat percentiles (usec): 00:16:20.381 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 149], 00:16:20.381 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:16:20.381 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:16:20.381 | 99.00th=[ 212], 99.50th=[ 310], 99.90th=[ 848], 99.95th=[ 1139], 00:16:20.381 | 99.99th=[ 1844] 00:16:20.381 bw ( KiB/s): min=20680, max=22952, per=34.99%, avg=22212.57, stdev=859.42, samples=7 00:16:20.381 iops : min= 5170, max= 5738, avg=5553.14, stdev=214.85, samples=7 00:16:20.381 lat (usec) : 10=0.01%, 250=99.34%, 500=0.43%, 750=0.10%, 1000=0.04% 00:16:20.381 lat (msec) : 2=0.07%, 4=0.01% 00:16:20.381 cpu : usr=1.61%, sys=6.69%, ctx=20715, majf=0, minf=1 00:16:20.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.381 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.381 issued rwts: total=20701,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.381 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81865: Mon May 13 18:29:36 2024 00:16:20.381 read: IOPS=3370, BW=13.2MiB/s (13.8MB/s)(42.2MiB/3208msec) 00:16:20.381 slat (usec): min=10, max=7812, avg=15.99, stdev=102.41 00:16:20.381 clat (usec): min=142, max=2731, avg=279.16, stdev=56.79 00:16:20.381 lat (usec): min=157, max=8091, avg=295.15, stdev=116.31 00:16:20.381 clat percentiles (usec): 00:16:20.381 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 190], 20.00th=[ 273], 00:16:20.381 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 293], 00:16:20.381 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 310], 95.00th=[ 318], 00:16:20.381 | 99.00th=[ 367], 99.50th=[ 392], 99.90th=[ 474], 99.95th=[ 824], 00:16:20.381 | 99.99th=[ 2409] 00:16:20.381 bw ( KiB/s): min=12904, max=16248, per=21.39%, avg=13576.00, stdev=1318.19, samples=6 00:16:20.381 iops : min= 3226, max= 4062, avg=3394.00, stdev=329.55, samples=6 00:16:20.381 lat (usec) : 250=13.95%, 500=85.96%, 750=0.03%, 1000=0.02% 00:16:20.381 lat (msec) : 2=0.01%, 4=0.03% 00:16:20.381 cpu : usr=1.25%, sys=4.05%, ctx=10825, majf=0, minf=1 00:16:20.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.381 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.381 
issued rwts: total=10811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.381 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=81866: Mon May 13 18:29:36 2024 00:16:20.381 read: IOPS=5205, BW=20.3MiB/s (21.3MB/s)(60.3MiB/2965msec) 00:16:20.381 slat (usec): min=12, max=106, avg=16.24, stdev= 4.22 00:16:20.381 clat (usec): min=146, max=2202, avg=174.43, stdev=28.95 00:16:20.381 lat (usec): min=160, max=2219, avg=190.67, stdev=29.57 00:16:20.381 clat percentiles (usec): 00:16:20.381 | 1.00th=[ 157], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:16:20.381 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 176], 00:16:20.381 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:16:20.381 | 99.00th=[ 208], 99.50th=[ 219], 99.90th=[ 412], 99.95th=[ 578], 00:16:20.381 | 99.99th=[ 2040] 00:16:20.381 bw ( KiB/s): min=20472, max=21184, per=32.96%, avg=20921.60, stdev=296.13, samples=5 00:16:20.381 iops : min= 5118, max= 5296, avg=5230.40, stdev=74.03, samples=5 00:16:20.381 lat (usec) : 250=99.76%, 500=0.16%, 750=0.05%, 1000=0.02% 00:16:20.381 lat (msec) : 4=0.01% 00:16:20.381 cpu : usr=1.62%, sys=6.71%, ctx=15436, majf=0, minf=1 00:16:20.381 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:20.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.381 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.381 issued rwts: total=15433,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.381 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:20.381 00:16:20.381 Run status group 0 (all jobs): 00:16:20.381 READ: bw=62.0MiB/s (65.0MB/s), 13.2MiB/s-21.7MiB/s (13.8MB/s-22.8MB/s), io=231MiB (242MB), run=2965-3724msec 00:16:20.381 00:16:20.381 Disk stats (read/write): 00:16:20.381 nvme0n1: ios=11686/0, merge=0/0, ticks=3126/0, in_queue=3126, util=95.19% 00:16:20.381 nvme0n2: ios=20077/0, merge=0/0, ticks=3324/0, in_queue=3324, util=95.43% 00:16:20.381 nvme0n3: ios=10534/0, merge=0/0, ticks=2924/0, in_queue=2924, util=96.43% 00:16:20.381 nvme0n4: ios=14971/0, merge=0/0, ticks=2678/0, in_queue=2678, util=96.77% 00:16:20.381 18:29:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.381 18:29:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:20.640 18:29:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.640 18:29:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:20.898 18:29:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:20.898 18:29:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:21.155 18:29:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:21.155 18:29:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:21.721 18:29:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:16:21.721 18:29:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:21.978 18:29:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:16:21.978 18:29:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 81823 00:16:21.978 18:29:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:16:21.978 18:29:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:21.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:21.978 18:29:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:21.978 18:29:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0 00:16:21.978 18:29:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:16:21.978 18:29:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.978 18:29:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:16:21.978 18:29:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:21.978 nvmf hotplug test: fio failed as expected 00:16:21.978 18:29:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0 00:16:21.978 18:29:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:21.978 18:29:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:21.978 18:29:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:22.237 rmmod nvme_tcp 00:16:22.237 rmmod nvme_fabrics 00:16:22.237 rmmod nvme_keyring 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 81331 ']' 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 81331 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 81331 ']' 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- 
# kill -0 81331 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81331 00:16:22.237 killing process with pid 81331 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81331' 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 81331 00:16:22.237 [2024-05-13 18:29:38.112133] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:22.237 18:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 81331 00:16:22.496 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:22.496 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:22.496 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:22.496 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:22.496 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:22.496 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.496 18:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.496 18:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.496 18:29:38 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:22.496 00:16:22.496 real 0m19.944s 00:16:22.496 user 1m15.489s 00:16:22.496 sys 0m9.545s 00:16:22.496 18:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:22.496 ************************************ 00:16:22.496 END TEST nvmf_fio_target 00:16:22.496 ************************************ 00:16:22.496 18:29:38 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.755 18:29:38 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:22.756 18:29:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:22.756 18:29:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:22.756 18:29:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:22.756 ************************************ 00:16:22.756 START TEST nvmf_bdevio 00:16:22.756 ************************************ 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:22.756 * Looking for test storage... 
00:16:22.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.756 18:29:38 
nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@431 -- 
# [[ tcp == tcp ]] 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:22.756 Cannot find device "nvmf_tgt_br" 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@155 -- # true 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:22.756 Cannot find device "nvmf_tgt_br2" 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@156 -- # true 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:22.756 Cannot find device "nvmf_tgt_br" 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@158 -- # true 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:22.756 Cannot find device "nvmf_tgt_br2" 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@159 -- # true 00:16:22.756 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:23.014 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:23.014 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:23.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.014 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:16:23.014 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:23.014 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:23.014 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:16:23.014 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:23.014 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:23.014 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link add 
nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:23.014 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:23.014 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:23.014 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:23.014 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:23.014 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:23.014 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:23.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:16:23.015 00:16:23.015 --- 10.0.0.2 ping statistics --- 00:16:23.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.015 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:23.015 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:23.015 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:23.015 00:16:23.015 --- 10.0.0.3 ping statistics --- 00:16:23.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.015 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:23.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:23.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:23.015 00:16:23.015 --- 10.0.0.1 ping statistics --- 00:16:23.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.015 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@433 -- # return 0 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=82200 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 82200 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 82200 ']' 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:23.015 18:29:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:23.272 [2024-05-13 18:29:39.003483] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:16:23.272 [2024-05-13 18:29:39.003618] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.272 [2024-05-13 18:29:39.149995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.529 [2024-05-13 18:29:39.283265] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.529 [2024-05-13 18:29:39.283543] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:23.529 [2024-05-13 18:29:39.283748] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.529 [2024-05-13 18:29:39.283902] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.529 [2024-05-13 18:29:39.284011] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.529 [2024-05-13 18:29:39.284378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:23.529 [2024-05-13 18:29:39.284604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:23.529 [2024-05-13 18:29:39.284601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.529 [2024-05-13 18:29:39.284456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:24.091 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:24.091 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:16:24.091 18:29:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:24.091 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:24.091 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.350 [2024-05-13 18:29:40.060139] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.350 Malloc0 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:16:24.350 [2024-05-13 18:29:40.130923] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:24.350 [2024-05-13 18:29:40.131376] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:24.350 { 00:16:24.350 "params": { 00:16:24.350 "name": "Nvme$subsystem", 00:16:24.350 "trtype": "$TEST_TRANSPORT", 00:16:24.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:24.350 "adrfam": "ipv4", 00:16:24.350 "trsvcid": "$NVMF_PORT", 00:16:24.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:24.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:24.350 "hdgst": ${hdgst:-false}, 00:16:24.350 "ddgst": ${ddgst:-false} 00:16:24.350 }, 00:16:24.350 "method": "bdev_nvme_attach_controller" 00:16:24.350 } 00:16:24.350 EOF 00:16:24.350 )") 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:16:24.350 18:29:40 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:24.350 "params": { 00:16:24.350 "name": "Nvme1", 00:16:24.350 "trtype": "tcp", 00:16:24.350 "traddr": "10.0.0.2", 00:16:24.350 "adrfam": "ipv4", 00:16:24.350 "trsvcid": "4420", 00:16:24.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:24.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:24.350 "hdgst": false, 00:16:24.350 "ddgst": false 00:16:24.350 }, 00:16:24.350 "method": "bdev_nvme_attach_controller" 00:16:24.350 }' 00:16:24.350 [2024-05-13 18:29:40.188418] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
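The xtrace above records target/bdevio.sh provisioning the NVMe/TCP target over JSON-RPC and then handing bdevio a generated bdev config. A minimal standalone sketch of the same provisioning, assuming a running nvmf_tgt reachable on the default /var/tmp/spdk.sock (the harness instead wraps each call in its netns-aware rpc_cmd helper):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB malloc bdev, 512-byte blocks (the I/O targets line reports it as 131072 blocks)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allow any host, -s serial number
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON printed just above is what gen_nvmf_target_json resolves the template to; bdevio presumably receives it through bash process substitution, which is why the logged command line shows --json /dev/fd/62.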
00:16:24.350 [2024-05-13 18:29:40.188509] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82255 ] 00:16:24.608 [2024-05-13 18:29:40.321038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:24.608 [2024-05-13 18:29:40.458307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.608 [2024-05-13 18:29:40.458456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.608 [2024-05-13 18:29:40.458463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.868 I/O targets: 00:16:24.869 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:24.869 00:16:24.869 00:16:24.869 CUnit - A unit testing framework for C - Version 2.1-3 00:16:24.869 http://cunit.sourceforge.net/ 00:16:24.869 00:16:24.869 00:16:24.869 Suite: bdevio tests on: Nvme1n1 00:16:24.869 Test: blockdev write read block ...passed 00:16:24.869 Test: blockdev write zeroes read block ...passed 00:16:24.869 Test: blockdev write zeroes read no split ...passed 00:16:24.869 Test: blockdev write zeroes read split ...passed 00:16:24.869 Test: blockdev write zeroes read split partial ...passed 00:16:24.869 Test: blockdev reset ...[2024-05-13 18:29:40.770696] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:24.869 [2024-05-13 18:29:40.770996] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc46e0 (9): Bad file descriptor 00:16:24.869 [2024-05-13 18:29:40.787919] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:24.869 passed 00:16:24.869 Test: blockdev write read 8 blocks ...passed 00:16:24.869 Test: blockdev write read size > 128k ...passed 00:16:24.869 Test: blockdev write read invalid size ...passed 00:16:25.128 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:25.128 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:25.128 Test: blockdev write read max offset ...passed 00:16:25.128 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:25.128 Test: blockdev writev readv 8 blocks ...passed 00:16:25.128 Test: blockdev writev readv 30 x 1block ...passed 00:16:25.128 Test: blockdev writev readv block ...passed 00:16:25.128 Test: blockdev writev readv size > 128k ...passed 00:16:25.128 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:25.128 Test: blockdev comparev and writev ...[2024-05-13 18:29:40.964433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.128 [2024-05-13 18:29:40.964503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.128 [2024-05-13 18:29:40.964525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.128 [2024-05-13 18:29:40.964537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:25.128 [2024-05-13 18:29:40.965059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.128 [2024-05-13 18:29:40.965086] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:25.128 [2024-05-13 18:29:40.965115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.128 [2024-05-13 18:29:40.965126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:25.128 [2024-05-13 18:29:40.965519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.128 [2024-05-13 18:29:40.965536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:25.128 [2024-05-13 18:29:40.965553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.128 [2024-05-13 18:29:40.965564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:25.128 [2024-05-13 18:29:40.966004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.128 [2024-05-13 18:29:40.966021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:25.128 [2024-05-13 18:29:40.966037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.128 [2024-05-13 18:29:40.966049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:25.128 passed 00:16:25.128 Test: blockdev nvme passthru rw ...passed 00:16:25.128 Test: blockdev nvme passthru vendor specific ...[2024-05-13 18:29:41.049283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.128 passed 00:16:25.128 Test: blockdev nvme admin passthru ...[2024-05-13 18:29:41.049609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:25.128 [2024-05-13 18:29:41.049811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.128 [2024-05-13 18:29:41.049829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:25.128 [2024-05-13 18:29:41.049965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.128 [2024-05-13 18:29:41.049981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:25.128 [2024-05-13 18:29:41.050153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.128 [2024-05-13 18:29:41.050169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:25.128 passed 00:16:25.388 Test: blockdev copy ...passed 00:16:25.388 00:16:25.388 Run Summary: Type Total Ran Passed Failed Inactive 00:16:25.388 suites 1 1 n/a 0 0 00:16:25.388 tests 23 23 23 0 0 00:16:25.388 asserts 
152 152 152 0 n/a 00:16:25.388 00:16:25.388 Elapsed time = 0.905 seconds 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:25.646 rmmod nvme_tcp 00:16:25.646 rmmod nvme_fabrics 00:16:25.646 rmmod nvme_keyring 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 82200 ']' 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 82200 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 82200 ']' 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 82200 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82200 00:16:25.646 killing process with pid 82200 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82200' 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 82200 00:16:25.646 [2024-05-13 18:29:41.477495] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:25.646 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 82200 00:16:25.905 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:25.905 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:25.905 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:25.905 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:25.905 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:25.905 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
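After the run summary, bdevio.sh deletes the subsystem and the EXIT trap runs nvmftestfini, which is what produces the rmmod and kill output above. A rough sketch of that teardown, with the namespace removal assumed to be an ip netns del inside _remove_spdk_ns (the log only shows the wrapper call):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp                     # pulls out nvme_fabrics/nvme_keyring too, per the rmmod lines
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"          # nvmfpid=82200 for this run
    ip netns del nvmf_tgt_ns_spdk               # assumed body of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if               # drop the initiator-side address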
00:16:25.905 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:25.905 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.905 18:29:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:25.905 ************************************ 00:16:25.905 END TEST nvmf_bdevio 00:16:25.905 ************************************ 00:16:25.905 00:16:25.905 real 0m3.337s 00:16:25.905 user 0m12.002s 00:16:25.905 sys 0m0.821s 00:16:25.905 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:25.905 18:29:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:16:26.164 18:29:41 nvmf_tcp -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:16:26.164 18:29:41 nvmf_tcp -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:26.164 18:29:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:26.164 18:29:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:26.164 18:29:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:26.164 ************************************ 00:16:26.164 START TEST nvmf_bdevio_no_huge 00:16:26.164 ************************************ 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:26.164 * Looking for test storage... 00:16:26.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
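Before any network setup, sourcing nvmf/common.sh derives a host identity with nvme-cli; the log only shows the resolved values, so the suffix extraction in this small sketch is an assumption:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # assumed: keep only the uuid portion
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")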
00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:26.164 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:26.165 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:26.165 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:26.165 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:26.165 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.165 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:26.165 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:26.165 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:26.165 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:26.165 18:29:41 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 
00:16:26.165 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:26.165 Cannot find device "nvmf_tgt_br" 00:16:26.165 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # true 00:16:26.165 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.165 Cannot find device "nvmf_tgt_br2" 00:16:26.165 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # true 00:16:26.165 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:26.165 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:26.165 Cannot find device "nvmf_tgt_br" 00:16:26.165 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # true 00:16:26.165 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:26.165 Cannot find device "nvmf_tgt_br2" 00:16:26.165 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # true 00:16:26.165 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:26.165 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:26.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:26.424 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:26.424 18:29:42 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:26.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:16:26.424 00:16:26.424 --- 10.0.0.2 ping statistics --- 00:16:26.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.424 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:26.424 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:26.424 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.114 ms 00:16:26.424 00:16:26.424 --- 10.0.0.3 ping statistics --- 00:16:26.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.424 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:26.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:26.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:26.424 00:16:26.424 --- 10.0.0.1 ping statistics --- 00:16:26.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.424 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@433 -- # return 0 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:26.424 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:26.683 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:26.683 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:26.683 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:26.683 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:26.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.683 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=82433 00:16:26.683 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 82433 00:16:26.683 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 82433 ']' 00:16:26.683 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.683 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:26.683 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:26.683 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.683 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:26.683 18:29:42 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:26.683 [2024-05-13 18:29:42.443142] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:16:26.683 [2024-05-13 18:29:42.443239] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:26.683 [2024-05-13 18:29:42.589623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.941 [2024-05-13 18:29:42.714149] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
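The nvmf_veth_init commands above rebuild the same virtual topology for every suite before the target is launched: one veth pair per endpoint, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined by the nvmf_br bridge, port 4420 opened, and the links verified with pings. A condensed sketch of those steps (the full helper also creates the second target interface at 10.0.0.3 and brings each link up):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side, 10.0.0.1
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side, 10.0.0.2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                               # reachability check, as logged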
00:16:26.941 [2024-05-13 18:29:42.714390] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.941 [2024-05-13 18:29:42.714873] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.941 [2024-05-13 18:29:42.715148] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.941 [2024-05-13 18:29:42.715523] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.941 [2024-05-13 18:29:42.715883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:26.941 [2024-05-13 18:29:42.716033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:26.941 [2024-05-13 18:29:42.716139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:26.941 [2024-05-13 18:29:42.716146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.507 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:27.507 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:16:27.507 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:27.507 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:27.507 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.508 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.508 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:27.508 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.508 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.508 [2024-05-13 18:29:43.450213] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.766 Malloc0 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.766 18:29:43 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.766 [2024-05-13 18:29:43.490186] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:27.766 [2024-05-13 18:29:43.490501] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:27.766 { 00:16:27.766 "params": { 00:16:27.766 "name": "Nvme$subsystem", 00:16:27.766 "trtype": "$TEST_TRANSPORT", 00:16:27.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.766 "adrfam": "ipv4", 00:16:27.766 "trsvcid": "$NVMF_PORT", 00:16:27.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.766 "hdgst": ${hdgst:-false}, 00:16:27.766 "ddgst": ${ddgst:-false} 00:16:27.766 }, 00:16:27.766 "method": "bdev_nvme_attach_controller" 00:16:27.766 } 00:16:27.766 EOF 00:16:27.766 )") 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:16:27.766 18:29:43 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:27.766 "params": { 00:16:27.766 "name": "Nvme1", 00:16:27.766 "trtype": "tcp", 00:16:27.766 "traddr": "10.0.0.2", 00:16:27.766 "adrfam": "ipv4", 00:16:27.766 "trsvcid": "4420", 00:16:27.766 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.766 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:27.766 "hdgst": false, 00:16:27.766 "ddgst": false 00:16:27.766 }, 00:16:27.767 "method": "bdev_nvme_attach_controller" 00:16:27.767 }' 00:16:27.767 [2024-05-13 18:29:43.548811] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
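Functionally this suite repeats the first one; the difference is memory handling. Both the target and bdevio run with hugepages disabled and a 1024 MB cap, and the reactor lines show the two processes on disjoint cores (the target's -m 0x78 is cores 3-6, bdevio ends up on cores 0-2). A sketch of the two launch commands, with the JSON assumed to arrive via process substitution (hence --json /dev/fd/62 in the logged invocation):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
        --json <(gen_nvmf_target_json) --no-huge -s 1024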
00:16:27.767 [2024-05-13 18:29:43.548931] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82487 ] 00:16:27.767 [2024-05-13 18:29:43.689822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:28.024 [2024-05-13 18:29:43.846221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.024 [2024-05-13 18:29:43.846357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.024 [2024-05-13 18:29:43.846365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.334 I/O targets: 00:16:28.334 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:28.334 00:16:28.334 00:16:28.334 CUnit - A unit testing framework for C - Version 2.1-3 00:16:28.334 http://cunit.sourceforge.net/ 00:16:28.334 00:16:28.334 00:16:28.334 Suite: bdevio tests on: Nvme1n1 00:16:28.334 Test: blockdev write read block ...passed 00:16:28.334 Test: blockdev write zeroes read block ...passed 00:16:28.334 Test: blockdev write zeroes read no split ...passed 00:16:28.334 Test: blockdev write zeroes read split ...passed 00:16:28.334 Test: blockdev write zeroes read split partial ...passed 00:16:28.334 Test: blockdev reset ...[2024-05-13 18:29:44.187047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:28.334 [2024-05-13 18:29:44.187370] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a632f0 (9): Bad file descriptor 00:16:28.334 [2024-05-13 18:29:44.202441] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:28.334 passed 00:16:28.334 Test: blockdev write read 8 blocks ...passed 00:16:28.334 Test: blockdev write read size > 128k ...passed 00:16:28.334 Test: blockdev write read invalid size ...passed 00:16:28.334 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:28.334 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:28.334 Test: blockdev write read max offset ...passed 00:16:28.592 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:28.592 Test: blockdev writev readv 8 blocks ...passed 00:16:28.592 Test: blockdev writev readv 30 x 1block ...passed 00:16:28.592 Test: blockdev writev readv block ...passed 00:16:28.592 Test: blockdev writev readv size > 128k ...passed 00:16:28.592 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:28.592 Test: blockdev comparev and writev ...[2024-05-13 18:29:44.381032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.592 [2024-05-13 18:29:44.381096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:28.592 [2024-05-13 18:29:44.381118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.592 [2024-05-13 18:29:44.381130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.592 [2024-05-13 18:29:44.381599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.592 [2024-05-13 18:29:44.381618] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:28.592 [2024-05-13 18:29:44.381635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.592 [2024-05-13 18:29:44.381646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:28.592 [2024-05-13 18:29:44.382186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.592 [2024-05-13 18:29:44.382220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:28.592 [2024-05-13 18:29:44.382240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.592 [2024-05-13 18:29:44.382251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:28.592 [2024-05-13 18:29:44.382657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.592 [2024-05-13 18:29:44.382680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:28.592 [2024-05-13 18:29:44.382697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.592 [2024-05-13 18:29:44.382707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:28.592 passed 00:16:28.592 Test: blockdev nvme passthru rw ...passed 00:16:28.592 Test: blockdev nvme passthru vendor specific ...[2024-05-13 18:29:44.466214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.592 [2024-05-13 18:29:44.466497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:28.592 [2024-05-13 18:29:44.466688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.592 [2024-05-13 18:29:44.466707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:28.592 [2024-05-13 18:29:44.466861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.592 [2024-05-13 18:29:44.466878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:28.592 passed 00:16:28.592 Test: blockdev nvme admin passthru ...[2024-05-13 18:29:44.467308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.592 [2024-05-13 18:29:44.467341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:28.592 passed 00:16:28.592 Test: blockdev copy ...passed 00:16:28.592 00:16:28.592 Run Summary: Type Total Ran Passed Failed Inactive 00:16:28.592 suites 1 1 n/a 0 0 00:16:28.592 tests 23 23 23 0 0 00:16:28.592 asserts 152 152 152 0 
n/a 00:16:28.592 00:16:28.592 Elapsed time = 0.953 seconds 00:16:29.159 18:29:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.159 18:29:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.159 18:29:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:29.159 18:29:44 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.159 18:29:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:29.159 18:29:44 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:29.159 18:29:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:29.159 18:29:44 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:16:29.159 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:29.159 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:16:29.159 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:29.159 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:29.159 rmmod nvme_tcp 00:16:29.159 rmmod nvme_fabrics 00:16:29.159 rmmod nvme_keyring 00:16:29.159 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:29.159 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:16:29.159 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:16:29.159 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 82433 ']' 00:16:29.159 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 82433 00:16:29.159 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 82433 ']' 00:16:29.159 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 82433 00:16:29.159 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:16:29.159 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:29.159 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82433 00:16:29.418 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:16:29.418 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:16:29.418 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82433' 00:16:29.418 killing process with pid 82433 00:16:29.418 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 82433 00:16:29.418 [2024-05-13 18:29:45.120888] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:29.418 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 82433 00:16:29.677 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:29.677 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:29.677 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:29.677 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.677 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:29.677 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.677 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.677 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.677 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:29.935 00:16:29.935 real 0m3.765s 00:16:29.935 user 0m13.360s 00:16:29.935 sys 0m1.402s 00:16:29.935 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:29.935 18:29:45 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:29.935 ************************************ 00:16:29.935 END TEST nvmf_bdevio_no_huge 00:16:29.935 ************************************ 00:16:29.935 18:29:45 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:29.935 18:29:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:29.935 18:29:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:29.935 18:29:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:29.935 ************************************ 00:16:29.935 START TEST nvmf_tls 00:16:29.935 ************************************ 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:29.935 * Looking for test storage... 00:16:29.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.935 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@432 -- # nvmf_veth_init 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:29.936 Cannot find device "nvmf_tgt_br" 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@155 -- # true 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:29.936 Cannot find device "nvmf_tgt_br2" 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@156 -- # true 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # ip link 
set nvmf_tgt_br down 00:16:29.936 Cannot find device "nvmf_tgt_br" 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@158 -- # true 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:29.936 Cannot find device "nvmf_tgt_br2" 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@159 -- # true 00:16:29.936 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:30.194 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:30.194 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.194 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.194 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@162 -- # true 00:16:30.194 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.194 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.194 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@163 -- # true 00:16:30.194 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:30.194 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:30.194 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:30.194 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.194 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.194 18:29:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i 
nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:30.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:30.194 00:16:30.194 --- 10.0.0.2 ping statistics --- 00:16:30.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.194 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:30.194 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.194 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.106 ms 00:16:30.194 00:16:30.194 --- 10.0.0.3 ping statistics --- 00:16:30.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.194 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:16:30.194 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:30.452 00:16:30.452 --- 10.0.0.1 ping statistics --- 00:16:30.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.452 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:30.452 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.452 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@433 -- # return 0 00:16:30.452 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:30.452 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.452 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:30.452 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:30.452 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.452 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:30.452 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:30.452 18:29:46 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:30.453 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:30.453 18:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:30.453 18:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.453 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=82673 00:16:30.453 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:30.453 18:29:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 82673 00:16:30.453 18:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 82673 ']' 00:16:30.453 18:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.453 18:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:30.453 18:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
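The nvmf_veth_init phase above reduces to the standalone sequence below (same interface names and addresses as in this log): one veth pair for the initiator, two for the target namespace, all bridged together, with iptables opened for port 4420 and pings confirming reachability in both directions. The "Cannot find device" / "Cannot open network namespace" messages earlier are just the init helper flushing state that does not exist yet on a fresh VM. Comments are mine.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first listener address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second listener address
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # allow bridged forwarding
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> host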
00:16:30.453 18:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:30.453 18:29:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.453 [2024-05-13 18:29:46.230134] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:16:30.453 [2024-05-13 18:29:46.230239] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.453 [2024-05-13 18:29:46.393693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.711 [2024-05-13 18:29:46.519091] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.711 [2024-05-13 18:29:46.519160] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.711 [2024-05-13 18:29:46.519172] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.711 [2024-05-13 18:29:46.519181] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.711 [2024-05-13 18:29:46.519188] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.711 [2024-05-13 18:29:46.519214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.279 18:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:31.279 18:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:31.279 18:29:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:31.279 18:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:31.279 18:29:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.537 18:29:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.537 18:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:16:31.537 18:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:31.854 true 00:16:31.854 18:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:16:31.854 18:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:32.112 18:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:16:32.112 18:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:16:32.112 18:29:47 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:32.374 18:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:32.374 18:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:16:32.632 18:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:16:32.632 18:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:16:32.632 18:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:32.891 18:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:32.891 18:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # 
jq -r .tls_version 00:16:33.148 18:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:16:33.148 18:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:16:33.149 18:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:33.149 18:29:48 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:16:33.408 18:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:16:33.408 18:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:16:33.408 18:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:33.667 18:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:16:33.667 18:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:33.926 18:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:16:33.926 18:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:16:33.926 18:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:34.185 18:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:34.185 18:29:49 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.hP1556dFj3 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:34.444 
18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.aUJf53slrR 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.hP1556dFj3 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.aUJf53slrR 00:16:34.444 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:34.703 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:35.269 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.hP1556dFj3 00:16:35.269 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.hP1556dFj3 00:16:35.269 18:29:50 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:35.269 [2024-05-13 18:29:51.131857] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.269 18:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:35.527 18:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:35.785 [2024-05-13 18:29:51.631887] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:35.785 [2024-05-13 18:29:51.632027] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:35.785 [2024-05-13 18:29:51.632283] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.785 18:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:36.044 malloc0 00:16:36.044 18:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:36.302 18:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hP1556dFj3 00:16:36.560 [2024-05-13 18:29:52.344049] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:16:36.560 18:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.hP1556dFj3 00:16:48.785 Initializing NVMe Controllers 00:16:48.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:48.785 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:48.785 Initialization complete. Launching workers. 
00:16:48.785 ======================================================== 00:16:48.785 Latency(us) 00:16:48.785 Device Information : IOPS MiB/s Average min max 00:16:48.785 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7759.59 30.31 8249.88 1416.30 13961.85 00:16:48.785 ======================================================== 00:16:48.785 Total : 7759.59 30.31 8249.88 1416.30 13961.85 00:16:48.785 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.hP1556dFj3 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hP1556dFj3' 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83034 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83034 /var/tmp/bdevperf.sock 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 83034 ']' 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:48.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:48.785 18:30:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:48.785 [2024-05-13 18:30:02.604730] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:16:48.785 [2024-05-13 18:30:02.604844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83034 ] 00:16:48.785 [2024-05-13 18:30:02.741311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.785 [2024-05-13 18:30:02.875957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.785 18:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:48.785 18:30:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:48.785 18:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hP1556dFj3 00:16:48.785 [2024-05-13 18:30:03.855596] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:48.785 [2024-05-13 18:30:03.855721] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:48.785 TLSTESTn1 00:16:48.785 18:30:03 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:48.785 Running I/O for 10 seconds... 00:16:58.755 00:16:58.755 Latency(us) 00:16:58.755 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.755 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:58.755 Verification LBA range: start 0x0 length 0x2000 00:16:58.755 TLSTESTn1 : 10.02 3870.52 15.12 0.00 0.00 33004.21 7387.69 19779.96 00:16:58.755 =================================================================================================================== 00:16:58.755 Total : 3870.52 15.12 0.00 0.00 33004.21 7387.69 19779.96 00:16:58.755 0 00:16:58.755 18:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:58.755 18:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83034 00:16:58.755 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 83034 ']' 00:16:58.755 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 83034 00:16:58.755 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:58.755 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:58.755 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83034 00:16:58.755 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:58.755 killing process with pid 83034 00:16:58.755 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:58.755 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83034' 00:16:58.755 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 83034 00:16:58.755 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 83034 00:16:58.755 Received shutdown signal, test time was about 10.000000 seconds 00:16:58.756 00:16:58.756 Latency(us) 00:16:58.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.756 
=================================================================================================================== 00:16:58.756 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:58.756 [2024-05-13 18:30:14.163186] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aUJf53slrR 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aUJf53slrR 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aUJf53slrR 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aUJf53slrR' 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83176 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83176 /var/tmp/bdevperf.sock 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 83176 ']' 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:58.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:58.756 18:30:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:58.756 [2024-05-13 18:30:14.488904] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:16:58.756 [2024-05-13 18:30:14.489010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83176 ] 00:16:58.756 [2024-05-13 18:30:14.625119] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.013 [2024-05-13 18:30:14.744782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:59.578 18:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:59.578 18:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:16:59.578 18:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aUJf53slrR 00:16:59.835 [2024-05-13 18:30:15.683856] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:59.835 [2024-05-13 18:30:15.683979] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:16:59.835 [2024-05-13 18:30:15.690217] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:16:59.835 [2024-05-13 18:30:15.690840] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4c970 (107): Transport endpoint is not connected 00:16:59.835 [2024-05-13 18:30:15.691825] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c4c970 (9): Bad file descriptor 00:16:59.835 [2024-05-13 18:30:15.692822] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:16:59.835 [2024-05-13 18:30:15.692851] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:16:59.835 [2024-05-13 18:30:15.692862] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:16:59.835 2024/05/13 18:30:15 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.aUJf53slrR subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:16:59.835 request: 00:16:59.835 { 00:16:59.835 "method": "bdev_nvme_attach_controller", 00:16:59.835 "params": { 00:16:59.835 "name": "TLSTEST", 00:16:59.835 "trtype": "tcp", 00:16:59.835 "traddr": "10.0.0.2", 00:16:59.835 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:59.835 "adrfam": "ipv4", 00:16:59.835 "trsvcid": "4420", 00:16:59.835 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:59.835 "psk": "/tmp/tmp.aUJf53slrR" 00:16:59.835 } 00:16:59.835 } 00:16:59.835 Got JSON-RPC error response 00:16:59.835 GoRPCClient: error on JSON-RPC call 00:16:59.835 18:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83176 00:16:59.835 18:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 83176 ']' 00:16:59.835 18:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 83176 00:16:59.835 18:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:16:59.835 18:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:59.835 18:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83176 00:16:59.835 18:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:16:59.835 18:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:16:59.835 killing process with pid 83176 00:16:59.835 18:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83176' 00:16:59.835 18:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 83176 00:16:59.835 18:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 83176 00:16:59.835 Received shutdown signal, test time was about 10.000000 seconds 00:16:59.835 00:16:59.835 Latency(us) 00:16:59.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.835 =================================================================================================================== 00:16:59.835 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:59.835 [2024-05-13 18:30:15.744617] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:00.093 18:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hP1556dFj3 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hP1556dFj3 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.hP1556dFj3 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hP1556dFj3' 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83227 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83227 /var/tmp/bdevperf.sock 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 83227 ']' 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:00.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:00.093 18:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:00.351 [2024-05-13 18:30:16.063965] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:17:00.351 [2024-05-13 18:30:16.064070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83227 ] 00:17:00.351 [2024-05-13 18:30:16.199471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.608 [2024-05-13 18:30:16.325918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.174 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:01.174 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:01.174 18:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.hP1556dFj3 00:17:01.432 [2024-05-13 18:30:17.297613] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:01.432 [2024-05-13 18:30:17.297727] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:01.432 [2024-05-13 18:30:17.302706] tcp.c: 879:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:01.432 [2024-05-13 18:30:17.302748] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:01.432 [2024-05-13 18:30:17.302809] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:01.432 [2024-05-13 18:30:17.303411] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb5970 (107): Transport endpoint is not connected 00:17:01.432 [2024-05-13 18:30:17.304396] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfb5970 (9): Bad file descriptor 00:17:01.432 [2024-05-13 18:30:17.305391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:01.432 [2024-05-13 18:30:17.305417] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:01.432 [2024-05-13 18:30:17.305427] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:01.432 2024/05/13 18:30:17 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.hP1556dFj3 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:01.432 request: 00:17:01.432 { 00:17:01.432 "method": "bdev_nvme_attach_controller", 00:17:01.432 "params": { 00:17:01.432 "name": "TLSTEST", 00:17:01.432 "trtype": "tcp", 00:17:01.432 "traddr": "10.0.0.2", 00:17:01.432 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:01.432 "adrfam": "ipv4", 00:17:01.432 "trsvcid": "4420", 00:17:01.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:01.432 "psk": "/tmp/tmp.hP1556dFj3" 00:17:01.432 } 00:17:01.432 } 00:17:01.432 Got JSON-RPC error response 00:17:01.432 GoRPCClient: error on JSON-RPC call 00:17:01.432 18:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83227 00:17:01.432 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 83227 ']' 00:17:01.432 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 83227 00:17:01.432 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:01.432 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:01.432 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83227 00:17:01.432 killing process with pid 83227 00:17:01.432 Received shutdown signal, test time was about 10.000000 seconds 00:17:01.432 00:17:01.432 Latency(us) 00:17:01.432 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.432 =================================================================================================================== 00:17:01.432 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:01.432 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:01.432 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:01.432 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83227' 00:17:01.432 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 83227 00:17:01.432 [2024-05-13 18:30:17.350176] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:01.432 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 83227 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hP1556dFj3 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hP1556dFj3 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local 
arg=run_bdevperf 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.hP1556dFj3 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.hP1556dFj3' 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83267 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83267 /var/tmp/bdevperf.sock 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 83267 ']' 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:01.690 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:01.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:01.691 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:01.691 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:01.691 18:30:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:01.948 [2024-05-13 18:30:17.667036] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:17:01.948 [2024-05-13 18:30:17.667126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83267 ] 00:17:01.948 [2024-05-13 18:30:17.802746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.205 [2024-05-13 18:30:17.924759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.770 18:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:02.770 18:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:02.770 18:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hP1556dFj3 00:17:03.028 [2024-05-13 18:30:18.913253] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:03.028 [2024-05-13 18:30:18.913386] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:03.028 [2024-05-13 18:30:18.920565] tcp.c: 879:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:03.028 [2024-05-13 18:30:18.920619] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:03.028 [2024-05-13 18:30:18.920689] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:03.028 [2024-05-13 18:30:18.921462] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc970 (107): Transport endpoint is not connected 00:17:03.028 [2024-05-13 18:30:18.922449] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1adc970 (9): Bad file descriptor 00:17:03.028 [2024-05-13 18:30:18.923444] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:03.028 [2024-05-13 18:30:18.923490] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:03.028 [2024-05-13 18:30:18.923500] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:03.028 2024/05/13 18:30:18 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.hP1556dFj3 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:03.028 request: 00:17:03.028 { 00:17:03.028 "method": "bdev_nvme_attach_controller", 00:17:03.028 "params": { 00:17:03.028 "name": "TLSTEST", 00:17:03.028 "trtype": "tcp", 00:17:03.028 "traddr": "10.0.0.2", 00:17:03.028 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:03.028 "adrfam": "ipv4", 00:17:03.028 "trsvcid": "4420", 00:17:03.028 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:03.028 "psk": "/tmp/tmp.hP1556dFj3" 00:17:03.028 } 00:17:03.028 } 00:17:03.028 Got JSON-RPC error response 00:17:03.028 GoRPCClient: error on JSON-RPC call 00:17:03.028 18:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83267 00:17:03.028 18:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 83267 ']' 00:17:03.028 18:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 83267 00:17:03.028 18:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:03.028 18:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:03.028 18:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83267 00:17:03.028 killing process with pid 83267 00:17:03.028 Received shutdown signal, test time was about 10.000000 seconds 00:17:03.028 00:17:03.028 Latency(us) 00:17:03.028 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.028 =================================================================================================================== 00:17:03.028 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:03.028 18:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:03.028 18:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:03.028 18:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83267' 00:17:03.028 18:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 83267 00:17:03.028 [2024-05-13 18:30:18.967077] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:03.028 18:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 83267 00:17:03.285 18:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:03.285 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:03.285 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:03.285 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:03.285 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:03.285 18:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:03.285 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:03.286 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:03.286 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:03.286 18:30:19 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.286 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:03.286 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.286 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83313 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83313 /var/tmp/bdevperf.sock 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 83313 ']' 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:03.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:03.544 18:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.544 [2024-05-13 18:30:19.290361] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:17:03.544 [2024-05-13 18:30:19.290485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83313 ] 00:17:03.544 [2024-05-13 18:30:19.429764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.840 [2024-05-13 18:30:19.550037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.405 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:04.405 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:04.405 18:30:20 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:04.663 [2024-05-13 18:30:20.490872] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:04.663 [2024-05-13 18:30:20.492560] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1213490 (9): Bad file descriptor 00:17:04.663 [2024-05-13 18:30:20.493668] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:04.663 [2024-05-13 18:30:20.493699] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:04.663 [2024-05-13 18:30:20.493710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:04.663 2024/05/13 18:30:20 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:04.663 request: 00:17:04.663 { 00:17:04.663 "method": "bdev_nvme_attach_controller", 00:17:04.663 "params": { 00:17:04.663 "name": "TLSTEST", 00:17:04.663 "trtype": "tcp", 00:17:04.663 "traddr": "10.0.0.2", 00:17:04.663 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:04.663 "adrfam": "ipv4", 00:17:04.663 "trsvcid": "4420", 00:17:04.663 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:04.663 } 00:17:04.663 } 00:17:04.663 Got JSON-RPC error response 00:17:04.663 GoRPCClient: error on JSON-RPC call 00:17:04.663 18:30:20 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83313 00:17:04.663 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 83313 ']' 00:17:04.664 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 83313 00:17:04.664 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:04.664 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:04.664 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83313 00:17:04.664 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:04.664 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:04.664 killing process with pid 83313 00:17:04.664 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83313' 00:17:04.664 18:30:20 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@965 -- # kill 83313 00:17:04.664 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 83313 00:17:04.664 Received shutdown signal, test time was about 10.000000 seconds 00:17:04.664 00:17:04.664 Latency(us) 00:17:04.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.664 =================================================================================================================== 00:17:04.664 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 82673 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 82673 ']' 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 82673 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 82673 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:04.922 killing process with pid 82673 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 82673' 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 82673 00:17:04.922 [2024-05-13 18:30:20.812012] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:04.922 [2024-05-13 18:30:20.812055] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:04.922 18:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 82673 00:17:05.180 18:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:05.180 18:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:05.180 18:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:05.180 18:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:05.180 18:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:05.180 18:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:05.180 18:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.krxck21OLc 
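The trace above shows format_interchange_psk turning the 48-character key string 00112233445566778899aabbccddeeff0011223344556677 (digest identifier 2) into the interchange key NVMeTLSkey-1:02:...wWXNJw==:. A minimal Python sketch of how such a string can be assembled, assuming the layout suggested by the output itself: base64 over the literal key characters followed by a 4-byte CRC-32 of those characters (byte order assumed little-endian). The function name mirrors the shell helper but this is an illustration, not SPDK's implementation.

import base64
import struct
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    # base64(key bytes || CRC-32 of key), wrapped in the NVMeTLSkey-1:<digest>:...: envelope.
    raw = key.encode("ascii")
    blob = raw + struct.pack("<I", zlib.crc32(raw))
    return "NVMeTLSkey-1:{:02d}:{}:".format(digest, base64.b64encode(blob).decode("ascii"))

print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))

The first 64 base64 characters in the logged key decode back to the key string itself, which is what suggests this layout; only the trailing 4-byte checksum and its byte order are assumptions here.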
00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.krxck21OLc 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83374 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83374 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 83374 ']' 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:05.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:05.438 18:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.438 [2024-05-13 18:30:21.223826] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:17:05.438 [2024-05-13 18:30:21.223937] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.438 [2024-05-13 18:30:21.362296] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.695 [2024-05-13 18:30:21.484355] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.695 [2024-05-13 18:30:21.484445] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.695 [2024-05-13 18:30:21.484474] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:05.695 [2024-05-13 18:30:21.484483] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:05.695 [2024-05-13 18:30:21.484490] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
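The key is written to /tmp/tmp.krxck21OLc with echo -n and immediately chmod'ed to 0600; later in this run the same file is deliberately relaxed to 0666 and both the bdevperf initiator and the target refuse to load it. A small sketch, based only on what the trace shows, of creating the PSK file with owner-only permissions from Python (the helper name is made up for illustration):

import os

def write_psk_file(path: str, interchange_key: str) -> None:
    # Create (or truncate) the file with mode 0600 so no group/other bits are set,
    # then write the key without a trailing newline, matching echo -n above.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(interchange_key)
    os.chmod(path, 0o600)  # tighten an already-existing file as well

write_psk_file(
    "/tmp/tmp.krxck21OLc",
    "NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:",
)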
00:17:05.695 [2024-05-13 18:30:21.484517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.627 18:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:06.627 18:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:06.627 18:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:06.627 18:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.627 18:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:06.627 18:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.627 18:30:22 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.krxck21OLc 00:17:06.627 18:30:22 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.krxck21OLc 00:17:06.627 18:30:22 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:06.627 [2024-05-13 18:30:22.493951] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:06.627 18:30:22 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:06.885 18:30:22 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:07.143 [2024-05-13 18:30:22.974031] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:07.143 [2024-05-13 18:30:22.974162] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:07.143 [2024-05-13 18:30:22.974346] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.143 18:30:22 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:07.401 malloc0 00:17:07.401 18:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:07.659 18:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.krxck21OLc 00:17:07.917 [2024-05-13 18:30:23.794271] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.krxck21OLc 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.krxck21OLc' 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83477 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM 
EXIT 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83477 /var/tmp/bdevperf.sock 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 83477 ']' 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:07.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:07.917 18:30:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.176 [2024-05-13 18:30:23.880227] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:17:08.177 [2024-05-13 18:30:23.880337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83477 ] 00:17:08.177 [2024-05-13 18:30:24.018122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.435 [2024-05-13 18:30:24.130317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.002 18:30:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:09.002 18:30:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:09.002 18:30:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.krxck21OLc 00:17:09.261 [2024-05-13 18:30:25.127271] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:09.261 [2024-05-13 18:30:25.127390] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:09.261 TLSTESTn1 00:17:09.519 18:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:09.519 Running I/O for 10 seconds... 
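The attach above is issued through rpc.py against the bdevperf RPC socket with --psk /tmp/tmp.krxck21OLc, and the verify workload is then kicked off with bdevperf.py perform_tests. On the wire this is plain JSON-RPC 2.0 over the UNIX socket; a rough equivalent of the attach request is sketched below, with every parameter value taken from the trace (the framing is a simplification of what rpc.py actually does):

import json
import socket

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": {
        "name": "TLSTEST",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "/tmp/tmp.krxck21OLc",
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/bdevperf.sock")
    sock.sendall(json.dumps(request).encode())
    # A single recv is enough for this small reply; a real client would keep
    # reading until the JSON object is complete.
    print(sock.recv(65536).decode())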
00:17:19.576 00:17:19.576 Latency(us) 00:17:19.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.576 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:19.576 Verification LBA range: start 0x0 length 0x2000 00:17:19.576 TLSTESTn1 : 10.02 3957.50 15.46 0.00 0.00 32279.51 7417.48 25141.99 00:17:19.576 =================================================================================================================== 00:17:19.576 Total : 3957.50 15.46 0.00 0.00 32279.51 7417.48 25141.99 00:17:19.576 0 00:17:19.576 18:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:19.576 18:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 83477 00:17:19.576 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 83477 ']' 00:17:19.576 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 83477 00:17:19.576 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:19.576 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:19.576 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83477 00:17:19.576 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:19.576 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:19.576 killing process with pid 83477 00:17:19.576 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83477' 00:17:19.576 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 83477 00:17:19.576 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.576 00:17:19.576 Latency(us) 00:17:19.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.576 =================================================================================================================== 00:17:19.576 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:19.576 [2024-05-13 18:30:35.408329] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:19.576 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 83477 00:17:19.835 18:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.krxck21OLc 00:17:19.835 18:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.krxck21OLc 00:17:19.835 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:19.835 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.krxck21OLc 00:17:19.835 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.krxck21OLc 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:19.836 
18:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.krxck21OLc' 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83624 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83624 /var/tmp/bdevperf.sock 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 83624 ']' 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:19.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:19.836 18:30:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:19.836 [2024-05-13 18:30:35.746547] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:17:19.836 [2024-05-13 18:30:35.746705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83624 ] 00:17:20.094 [2024-05-13 18:30:35.891073] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.094 [2024-05-13 18:30:36.011163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.029 18:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:21.029 18:30:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:21.029 18:30:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.krxck21OLc 00:17:21.287 [2024-05-13 18:30:36.992356] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:21.287 [2024-05-13 18:30:36.992456] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:21.287 [2024-05-13 18:30:36.992467] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.krxck21OLc 00:17:21.287 2024/05/13 18:30:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.krxck21OLc subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:17:21.287 request: 00:17:21.287 { 00:17:21.287 "method": 
"bdev_nvme_attach_controller", 00:17:21.287 "params": { 00:17:21.287 "name": "TLSTEST", 00:17:21.287 "trtype": "tcp", 00:17:21.287 "traddr": "10.0.0.2", 00:17:21.287 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.287 "adrfam": "ipv4", 00:17:21.287 "trsvcid": "4420", 00:17:21.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.287 "psk": "/tmp/tmp.krxck21OLc" 00:17:21.287 } 00:17:21.287 } 00:17:21.287 Got JSON-RPC error response 00:17:21.287 GoRPCClient: error on JSON-RPC call 00:17:21.287 18:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 83624 00:17:21.287 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 83624 ']' 00:17:21.287 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 83624 00:17:21.287 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:21.287 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:21.287 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83624 00:17:21.287 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:21.287 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:21.287 killing process with pid 83624 00:17:21.287 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83624' 00:17:21.287 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 83624 00:17:21.287 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 83624 00:17:21.287 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.287 00:17:21.287 Latency(us) 00:17:21.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.287 =================================================================================================================== 00:17:21.287 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 83374 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 83374 ']' 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 83374 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83374 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83374' 00:17:21.545 killing process with pid 83374 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 83374 00:17:21.545 [2024-05-13 18:30:37.324449] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: 
deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:21.545 [2024-05-13 18:30:37.324491] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:21.545 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 83374 00:17:21.803 18:30:37 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:21.803 18:30:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:21.803 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:21.803 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.804 18:30:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83680 00:17:21.804 18:30:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:21.804 18:30:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83680 00:17:21.804 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 83680 ']' 00:17:21.804 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.804 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:21.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.804 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.804 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:21.804 18:30:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:21.804 [2024-05-13 18:30:37.672715] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:17:21.804 [2024-05-13 18:30:37.672854] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.062 [2024-05-13 18:30:37.809213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.062 [2024-05-13 18:30:37.926041] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.062 [2024-05-13 18:30:37.926111] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.062 [2024-05-13 18:30:37.926122] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.062 [2024-05-13 18:30:37.926129] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.062 [2024-05-13 18:30:37.926136] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
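A fresh target (pid 83680) is started here to exercise the server-side half of the permission test: with the key file still at 0666, the nvmf_subsystem_add_host call below fails with "Incorrect permissions for PSK file", mirroring the initiator-side failure seen above. Judging from those messages, the check amounts to rejecting any group or other permission bits on the PSK file; a sketch of that condition (illustrative, not SPDK's code):

import os
import stat

def psk_permissions_ok(path: str) -> bool:
    # Reject the file if anything beyond the owner bits is set;
    # 0600 passes, the 0666 used for this negative test does not.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0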
00:17:22.062 [2024-05-13 18:30:37.926161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.krxck21OLc 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.krxck21OLc 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.krxck21OLc 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.krxck21OLc 00:17:23.032 18:30:38 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:23.301 [2024-05-13 18:30:39.023416] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.301 18:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:23.560 18:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:23.818 [2024-05-13 18:30:39.579552] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:23.818 [2024-05-13 18:30:39.579689] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:23.818 [2024-05-13 18:30:39.579881] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.818 18:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:24.076 malloc0 00:17:24.076 18:30:39 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:24.334 18:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.krxck21OLc 00:17:24.593 [2024-05-13 18:30:40.371361] tcp.c:3567:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:24.593 [2024-05-13 18:30:40.371421] tcp.c:3653:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 
00:17:24.593 [2024-05-13 18:30:40.371469] subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:24.593 2024/05/13 18:30:40 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.krxck21OLc], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:24.593 request: 00:17:24.593 { 00:17:24.593 "method": "nvmf_subsystem_add_host", 00:17:24.593 "params": { 00:17:24.593 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.593 "host": "nqn.2016-06.io.spdk:host1", 00:17:24.593 "psk": "/tmp/tmp.krxck21OLc" 00:17:24.593 } 00:17:24.593 } 00:17:24.593 Got JSON-RPC error response 00:17:24.593 GoRPCClient: error on JSON-RPC call 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 83680 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 83680 ']' 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 83680 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83680 00:17:24.593 killing process with pid 83680 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83680' 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 83680 00:17:24.593 [2024-05-13 18:30:40.419398] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:24.593 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 83680 00:17:24.851 18:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.krxck21OLc 00:17:24.851 18:30:40 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:24.851 18:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:24.851 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:24.851 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.851 18:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83791 00:17:24.851 18:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83791 00:17:24.851 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 83791 ']' 00:17:24.851 18:30:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:24.851 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.851 18:30:40 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:17:24.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.851 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.851 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:24.851 18:30:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:24.851 [2024-05-13 18:30:40.755140] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:17:24.851 [2024-05-13 18:30:40.755237] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.111 [2024-05-13 18:30:40.886024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.111 [2024-05-13 18:30:41.002968] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.111 [2024-05-13 18:30:41.003021] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.111 [2024-05-13 18:30:41.003048] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:25.111 [2024-05-13 18:30:41.003057] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:25.111 [2024-05-13 18:30:41.003064] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.111 [2024-05-13 18:30:41.003089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.047 18:30:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:26.047 18:30:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:26.047 18:30:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:26.047 18:30:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:26.047 18:30:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.047 18:30:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.047 18:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.krxck21OLc 00:17:26.047 18:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.krxck21OLc 00:17:26.047 18:30:41 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:26.344 [2024-05-13 18:30:42.012710] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.344 18:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:26.607 18:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:26.866 [2024-05-13 18:30:42.572804] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:26.866 [2024-05-13 18:30:42.572920] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:26.866 
[2024-05-13 18:30:42.573111] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.866 18:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:27.125 malloc0 00:17:27.125 18:30:42 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:27.384 18:30:43 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.krxck21OLc 00:17:27.642 [2024-05-13 18:30:43.376848] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:27.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.642 18:30:43 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=83898 00:17:27.642 18:30:43 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.642 18:30:43 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.642 18:30:43 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 83898 /var/tmp/bdevperf.sock 00:17:27.642 18:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 83898 ']' 00:17:27.642 18:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.642 18:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:27.642 18:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.642 18:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:27.642 18:30:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:27.642 [2024-05-13 18:30:43.450622] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:17:27.642 [2024-05-13 18:30:43.450727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83898 ] 00:17:27.900 [2024-05-13 18:30:43.589035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.900 [2024-05-13 18:30:43.720247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.832 18:30:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:28.832 18:30:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:28.832 18:30:44 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.krxck21OLc 00:17:28.832 [2024-05-13 18:30:44.692689] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.832 [2024-05-13 18:30:44.692813] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:28.832 TLSTESTn1 00:17:29.090 18:30:44 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:29.349 18:30:45 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:17:29.349 "subsystems": [ 00:17:29.349 { 00:17:29.349 "subsystem": "keyring", 00:17:29.349 "config": [] 00:17:29.349 }, 00:17:29.349 { 00:17:29.350 "subsystem": "iobuf", 00:17:29.350 "config": [ 00:17:29.350 { 00:17:29.350 "method": "iobuf_set_options", 00:17:29.350 "params": { 00:17:29.350 "large_bufsize": 135168, 00:17:29.350 "large_pool_count": 1024, 00:17:29.350 "small_bufsize": 8192, 00:17:29.350 "small_pool_count": 8192 00:17:29.350 } 00:17:29.350 } 00:17:29.350 ] 00:17:29.350 }, 00:17:29.350 { 00:17:29.350 "subsystem": "sock", 00:17:29.350 "config": [ 00:17:29.350 { 00:17:29.350 "method": "sock_impl_set_options", 00:17:29.350 "params": { 00:17:29.350 "enable_ktls": false, 00:17:29.350 "enable_placement_id": 0, 00:17:29.350 "enable_quickack": false, 00:17:29.350 "enable_recv_pipe": true, 00:17:29.350 "enable_zerocopy_send_client": false, 00:17:29.350 "enable_zerocopy_send_server": true, 00:17:29.350 "impl_name": "posix", 00:17:29.350 "recv_buf_size": 2097152, 00:17:29.350 "send_buf_size": 2097152, 00:17:29.350 "tls_version": 0, 00:17:29.350 "zerocopy_threshold": 0 00:17:29.350 } 00:17:29.350 }, 00:17:29.350 { 00:17:29.350 "method": "sock_impl_set_options", 00:17:29.350 "params": { 00:17:29.350 "enable_ktls": false, 00:17:29.350 "enable_placement_id": 0, 00:17:29.350 "enable_quickack": false, 00:17:29.350 "enable_recv_pipe": true, 00:17:29.350 "enable_zerocopy_send_client": false, 00:17:29.350 "enable_zerocopy_send_server": true, 00:17:29.350 "impl_name": "ssl", 00:17:29.350 "recv_buf_size": 4096, 00:17:29.350 "send_buf_size": 4096, 00:17:29.350 "tls_version": 0, 00:17:29.350 "zerocopy_threshold": 0 00:17:29.350 } 00:17:29.350 } 00:17:29.350 ] 00:17:29.350 }, 00:17:29.350 { 00:17:29.350 "subsystem": "vmd", 00:17:29.350 "config": [] 00:17:29.350 }, 00:17:29.350 { 00:17:29.350 "subsystem": "accel", 00:17:29.350 "config": [ 00:17:29.350 { 00:17:29.350 "method": "accel_set_options", 00:17:29.350 "params": { 00:17:29.350 "buf_count": 2048, 00:17:29.350 "large_cache_size": 16, 00:17:29.350 
"sequence_count": 2048, 00:17:29.350 "small_cache_size": 128, 00:17:29.350 "task_count": 2048 00:17:29.350 } 00:17:29.350 } 00:17:29.350 ] 00:17:29.350 }, 00:17:29.350 { 00:17:29.350 "subsystem": "bdev", 00:17:29.350 "config": [ 00:17:29.350 { 00:17:29.350 "method": "bdev_set_options", 00:17:29.350 "params": { 00:17:29.350 "bdev_auto_examine": true, 00:17:29.350 "bdev_io_cache_size": 256, 00:17:29.350 "bdev_io_pool_size": 65535, 00:17:29.350 "iobuf_large_cache_size": 16, 00:17:29.350 "iobuf_small_cache_size": 128 00:17:29.350 } 00:17:29.350 }, 00:17:29.350 { 00:17:29.350 "method": "bdev_raid_set_options", 00:17:29.350 "params": { 00:17:29.350 "process_window_size_kb": 1024 00:17:29.350 } 00:17:29.350 }, 00:17:29.350 { 00:17:29.350 "method": "bdev_iscsi_set_options", 00:17:29.350 "params": { 00:17:29.350 "timeout_sec": 30 00:17:29.350 } 00:17:29.350 }, 00:17:29.350 { 00:17:29.350 "method": "bdev_nvme_set_options", 00:17:29.350 "params": { 00:17:29.350 "action_on_timeout": "none", 00:17:29.350 "allow_accel_sequence": false, 00:17:29.350 "arbitration_burst": 0, 00:17:29.350 "bdev_retry_count": 3, 00:17:29.350 "ctrlr_loss_timeout_sec": 0, 00:17:29.350 "delay_cmd_submit": true, 00:17:29.350 "dhchap_dhgroups": [ 00:17:29.350 "null", 00:17:29.350 "ffdhe2048", 00:17:29.350 "ffdhe3072", 00:17:29.350 "ffdhe4096", 00:17:29.350 "ffdhe6144", 00:17:29.350 "ffdhe8192" 00:17:29.350 ], 00:17:29.350 "dhchap_digests": [ 00:17:29.350 "sha256", 00:17:29.350 "sha384", 00:17:29.350 "sha512" 00:17:29.350 ], 00:17:29.350 "disable_auto_failback": false, 00:17:29.350 "fast_io_fail_timeout_sec": 0, 00:17:29.350 "generate_uuids": false, 00:17:29.350 "high_priority_weight": 0, 00:17:29.350 "io_path_stat": false, 00:17:29.350 "io_queue_requests": 0, 00:17:29.350 "keep_alive_timeout_ms": 10000, 00:17:29.350 "low_priority_weight": 0, 00:17:29.350 "medium_priority_weight": 0, 00:17:29.350 "nvme_adminq_poll_period_us": 10000, 00:17:29.350 "nvme_error_stat": false, 00:17:29.350 "nvme_ioq_poll_period_us": 0, 00:17:29.350 "rdma_cm_event_timeout_ms": 0, 00:17:29.350 "rdma_max_cq_size": 0, 00:17:29.350 "rdma_srq_size": 0, 00:17:29.350 "reconnect_delay_sec": 0, 00:17:29.350 "timeout_admin_us": 0, 00:17:29.350 "timeout_us": 0, 00:17:29.350 "transport_ack_timeout": 0, 00:17:29.350 "transport_retry_count": 4, 00:17:29.350 "transport_tos": 0 00:17:29.350 } 00:17:29.350 }, 00:17:29.350 { 00:17:29.350 "method": "bdev_nvme_set_hotplug", 00:17:29.350 "params": { 00:17:29.350 "enable": false, 00:17:29.351 "period_us": 100000 00:17:29.351 } 00:17:29.351 }, 00:17:29.351 { 00:17:29.351 "method": "bdev_malloc_create", 00:17:29.351 "params": { 00:17:29.351 "block_size": 4096, 00:17:29.351 "name": "malloc0", 00:17:29.351 "num_blocks": 8192, 00:17:29.351 "optimal_io_boundary": 0, 00:17:29.351 "physical_block_size": 4096, 00:17:29.351 "uuid": "ddaf231c-5cbe-45e8-9bed-5332f186f8cd" 00:17:29.351 } 00:17:29.351 }, 00:17:29.351 { 00:17:29.351 "method": "bdev_wait_for_examine" 00:17:29.351 } 00:17:29.351 ] 00:17:29.351 }, 00:17:29.351 { 00:17:29.351 "subsystem": "nbd", 00:17:29.351 "config": [] 00:17:29.351 }, 00:17:29.351 { 00:17:29.351 "subsystem": "scheduler", 00:17:29.351 "config": [ 00:17:29.351 { 00:17:29.351 "method": "framework_set_scheduler", 00:17:29.351 "params": { 00:17:29.351 "name": "static" 00:17:29.351 } 00:17:29.351 } 00:17:29.351 ] 00:17:29.351 }, 00:17:29.351 { 00:17:29.351 "subsystem": "nvmf", 00:17:29.351 "config": [ 00:17:29.351 { 00:17:29.351 "method": "nvmf_set_config", 00:17:29.351 "params": { 00:17:29.351 
"admin_cmd_passthru": { 00:17:29.351 "identify_ctrlr": false 00:17:29.351 }, 00:17:29.351 "discovery_filter": "match_any" 00:17:29.351 } 00:17:29.351 }, 00:17:29.351 { 00:17:29.351 "method": "nvmf_set_max_subsystems", 00:17:29.351 "params": { 00:17:29.351 "max_subsystems": 1024 00:17:29.351 } 00:17:29.351 }, 00:17:29.351 { 00:17:29.351 "method": "nvmf_set_crdt", 00:17:29.351 "params": { 00:17:29.351 "crdt1": 0, 00:17:29.351 "crdt2": 0, 00:17:29.351 "crdt3": 0 00:17:29.351 } 00:17:29.351 }, 00:17:29.351 { 00:17:29.351 "method": "nvmf_create_transport", 00:17:29.351 "params": { 00:17:29.351 "abort_timeout_sec": 1, 00:17:29.351 "ack_timeout": 0, 00:17:29.351 "buf_cache_size": 4294967295, 00:17:29.351 "c2h_success": false, 00:17:29.351 "data_wr_pool_size": 0, 00:17:29.351 "dif_insert_or_strip": false, 00:17:29.351 "in_capsule_data_size": 4096, 00:17:29.351 "io_unit_size": 131072, 00:17:29.351 "max_aq_depth": 128, 00:17:29.351 "max_io_qpairs_per_ctrlr": 127, 00:17:29.351 "max_io_size": 131072, 00:17:29.351 "max_queue_depth": 128, 00:17:29.351 "num_shared_buffers": 511, 00:17:29.351 "sock_priority": 0, 00:17:29.351 "trtype": "TCP", 00:17:29.351 "zcopy": false 00:17:29.351 } 00:17:29.351 }, 00:17:29.351 { 00:17:29.351 "method": "nvmf_create_subsystem", 00:17:29.351 "params": { 00:17:29.351 "allow_any_host": false, 00:17:29.351 "ana_reporting": false, 00:17:29.351 "max_cntlid": 65519, 00:17:29.351 "max_namespaces": 10, 00:17:29.351 "min_cntlid": 1, 00:17:29.351 "model_number": "SPDK bdev Controller", 00:17:29.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.351 "serial_number": "SPDK00000000000001" 00:17:29.351 } 00:17:29.351 }, 00:17:29.351 { 00:17:29.351 "method": "nvmf_subsystem_add_host", 00:17:29.351 "params": { 00:17:29.351 "host": "nqn.2016-06.io.spdk:host1", 00:17:29.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.351 "psk": "/tmp/tmp.krxck21OLc" 00:17:29.351 } 00:17:29.351 }, 00:17:29.351 { 00:17:29.351 "method": "nvmf_subsystem_add_ns", 00:17:29.351 "params": { 00:17:29.351 "namespace": { 00:17:29.351 "bdev_name": "malloc0", 00:17:29.351 "nguid": "DDAF231C5CBE45E89BED5332F186F8CD", 00:17:29.351 "no_auto_visible": false, 00:17:29.351 "nsid": 1, 00:17:29.351 "uuid": "ddaf231c-5cbe-45e8-9bed-5332f186f8cd" 00:17:29.351 }, 00:17:29.351 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:29.351 } 00:17:29.351 }, 00:17:29.351 { 00:17:29.351 "method": "nvmf_subsystem_add_listener", 00:17:29.351 "params": { 00:17:29.351 "listen_address": { 00:17:29.351 "adrfam": "IPv4", 00:17:29.351 "traddr": "10.0.0.2", 00:17:29.351 "trsvcid": "4420", 00:17:29.351 "trtype": "TCP" 00:17:29.351 }, 00:17:29.351 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.351 "secure_channel": true 00:17:29.351 } 00:17:29.351 } 00:17:29.351 ] 00:17:29.351 } 00:17:29.351 ] 00:17:29.351 }' 00:17:29.351 18:30:45 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:29.610 18:30:45 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:17:29.610 "subsystems": [ 00:17:29.610 { 00:17:29.610 "subsystem": "keyring", 00:17:29.610 "config": [] 00:17:29.610 }, 00:17:29.610 { 00:17:29.610 "subsystem": "iobuf", 00:17:29.610 "config": [ 00:17:29.610 { 00:17:29.610 "method": "iobuf_set_options", 00:17:29.610 "params": { 00:17:29.610 "large_bufsize": 135168, 00:17:29.610 "large_pool_count": 1024, 00:17:29.610 "small_bufsize": 8192, 00:17:29.610 "small_pool_count": 8192 00:17:29.610 } 00:17:29.610 } 00:17:29.610 ] 00:17:29.610 }, 00:17:29.610 { 00:17:29.610 "subsystem": 
"sock", 00:17:29.610 "config": [ 00:17:29.610 { 00:17:29.610 "method": "sock_impl_set_options", 00:17:29.610 "params": { 00:17:29.610 "enable_ktls": false, 00:17:29.610 "enable_placement_id": 0, 00:17:29.610 "enable_quickack": false, 00:17:29.610 "enable_recv_pipe": true, 00:17:29.610 "enable_zerocopy_send_client": false, 00:17:29.610 "enable_zerocopy_send_server": true, 00:17:29.610 "impl_name": "posix", 00:17:29.610 "recv_buf_size": 2097152, 00:17:29.610 "send_buf_size": 2097152, 00:17:29.610 "tls_version": 0, 00:17:29.610 "zerocopy_threshold": 0 00:17:29.610 } 00:17:29.610 }, 00:17:29.610 { 00:17:29.610 "method": "sock_impl_set_options", 00:17:29.610 "params": { 00:17:29.610 "enable_ktls": false, 00:17:29.610 "enable_placement_id": 0, 00:17:29.610 "enable_quickack": false, 00:17:29.610 "enable_recv_pipe": true, 00:17:29.610 "enable_zerocopy_send_client": false, 00:17:29.610 "enable_zerocopy_send_server": true, 00:17:29.610 "impl_name": "ssl", 00:17:29.610 "recv_buf_size": 4096, 00:17:29.611 "send_buf_size": 4096, 00:17:29.611 "tls_version": 0, 00:17:29.611 "zerocopy_threshold": 0 00:17:29.611 } 00:17:29.611 } 00:17:29.611 ] 00:17:29.611 }, 00:17:29.611 { 00:17:29.611 "subsystem": "vmd", 00:17:29.611 "config": [] 00:17:29.611 }, 00:17:29.611 { 00:17:29.611 "subsystem": "accel", 00:17:29.611 "config": [ 00:17:29.611 { 00:17:29.611 "method": "accel_set_options", 00:17:29.611 "params": { 00:17:29.611 "buf_count": 2048, 00:17:29.611 "large_cache_size": 16, 00:17:29.611 "sequence_count": 2048, 00:17:29.611 "small_cache_size": 128, 00:17:29.611 "task_count": 2048 00:17:29.611 } 00:17:29.611 } 00:17:29.611 ] 00:17:29.611 }, 00:17:29.611 { 00:17:29.611 "subsystem": "bdev", 00:17:29.611 "config": [ 00:17:29.611 { 00:17:29.611 "method": "bdev_set_options", 00:17:29.611 "params": { 00:17:29.611 "bdev_auto_examine": true, 00:17:29.611 "bdev_io_cache_size": 256, 00:17:29.611 "bdev_io_pool_size": 65535, 00:17:29.611 "iobuf_large_cache_size": 16, 00:17:29.611 "iobuf_small_cache_size": 128 00:17:29.611 } 00:17:29.611 }, 00:17:29.611 { 00:17:29.611 "method": "bdev_raid_set_options", 00:17:29.611 "params": { 00:17:29.611 "process_window_size_kb": 1024 00:17:29.611 } 00:17:29.611 }, 00:17:29.611 { 00:17:29.611 "method": "bdev_iscsi_set_options", 00:17:29.611 "params": { 00:17:29.611 "timeout_sec": 30 00:17:29.611 } 00:17:29.611 }, 00:17:29.611 { 00:17:29.611 "method": "bdev_nvme_set_options", 00:17:29.611 "params": { 00:17:29.611 "action_on_timeout": "none", 00:17:29.611 "allow_accel_sequence": false, 00:17:29.611 "arbitration_burst": 0, 00:17:29.611 "bdev_retry_count": 3, 00:17:29.611 "ctrlr_loss_timeout_sec": 0, 00:17:29.611 "delay_cmd_submit": true, 00:17:29.611 "dhchap_dhgroups": [ 00:17:29.611 "null", 00:17:29.611 "ffdhe2048", 00:17:29.611 "ffdhe3072", 00:17:29.611 "ffdhe4096", 00:17:29.611 "ffdhe6144", 00:17:29.611 "ffdhe8192" 00:17:29.611 ], 00:17:29.611 "dhchap_digests": [ 00:17:29.611 "sha256", 00:17:29.611 "sha384", 00:17:29.611 "sha512" 00:17:29.611 ], 00:17:29.611 "disable_auto_failback": false, 00:17:29.611 "fast_io_fail_timeout_sec": 0, 00:17:29.611 "generate_uuids": false, 00:17:29.611 "high_priority_weight": 0, 00:17:29.611 "io_path_stat": false, 00:17:29.611 "io_queue_requests": 512, 00:17:29.611 "keep_alive_timeout_ms": 10000, 00:17:29.611 "low_priority_weight": 0, 00:17:29.611 "medium_priority_weight": 0, 00:17:29.611 "nvme_adminq_poll_period_us": 10000, 00:17:29.611 "nvme_error_stat": false, 00:17:29.611 "nvme_ioq_poll_period_us": 0, 00:17:29.611 "rdma_cm_event_timeout_ms": 0, 
00:17:29.611 "rdma_max_cq_size": 0, 00:17:29.611 "rdma_srq_size": 0, 00:17:29.611 "reconnect_delay_sec": 0, 00:17:29.611 "timeout_admin_us": 0, 00:17:29.611 "timeout_us": 0, 00:17:29.611 "transport_ack_timeout": 0, 00:17:29.611 "transport_retry_count": 4, 00:17:29.611 "transport_tos": 0 00:17:29.611 } 00:17:29.611 }, 00:17:29.611 { 00:17:29.611 "method": "bdev_nvme_attach_controller", 00:17:29.611 "params": { 00:17:29.611 "adrfam": "IPv4", 00:17:29.611 "ctrlr_loss_timeout_sec": 0, 00:17:29.611 "ddgst": false, 00:17:29.611 "fast_io_fail_timeout_sec": 0, 00:17:29.611 "hdgst": false, 00:17:29.611 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.611 "name": "TLSTEST", 00:17:29.611 "prchk_guard": false, 00:17:29.611 "prchk_reftag": false, 00:17:29.611 "psk": "/tmp/tmp.krxck21OLc", 00:17:29.611 "reconnect_delay_sec": 0, 00:17:29.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.611 "traddr": "10.0.0.2", 00:17:29.611 "trsvcid": "4420", 00:17:29.611 "trtype": "TCP" 00:17:29.611 } 00:17:29.611 }, 00:17:29.611 { 00:17:29.611 "method": "bdev_nvme_set_hotplug", 00:17:29.611 "params": { 00:17:29.611 "enable": false, 00:17:29.611 "period_us": 100000 00:17:29.611 } 00:17:29.611 }, 00:17:29.611 { 00:17:29.611 "method": "bdev_wait_for_examine" 00:17:29.611 } 00:17:29.611 ] 00:17:29.611 }, 00:17:29.611 { 00:17:29.611 "subsystem": "nbd", 00:17:29.611 "config": [] 00:17:29.611 } 00:17:29.611 ] 00:17:29.611 }' 00:17:29.611 18:30:45 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 83898 00:17:29.611 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 83898 ']' 00:17:29.611 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 83898 00:17:29.611 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:29.611 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:29.612 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83898 00:17:29.612 killing process with pid 83898 00:17:29.612 Received shutdown signal, test time was about 10.000000 seconds 00:17:29.612 00:17:29.612 Latency(us) 00:17:29.612 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.612 =================================================================================================================== 00:17:29.612 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:29.612 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:29.612 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:29.612 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83898' 00:17:29.612 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 83898 00:17:29.612 [2024-05-13 18:30:45.505018] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:29.612 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 83898 00:17:29.870 18:30:45 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 83791 00:17:29.870 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 83791 ']' 00:17:29.870 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 83791 00:17:29.870 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:29.870 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = 
Linux ']' 00:17:29.870 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83791 00:17:29.870 killing process with pid 83791 00:17:29.870 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:29.870 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:29.870 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83791' 00:17:29.870 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 83791 00:17:29.870 [2024-05-13 18:30:45.786355] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:29.870 [2024-05-13 18:30:45.786403] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:29.870 18:30:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 83791 00:17:30.128 18:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:30.128 18:30:46 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:17:30.128 "subsystems": [ 00:17:30.128 { 00:17:30.128 "subsystem": "keyring", 00:17:30.128 "config": [] 00:17:30.128 }, 00:17:30.128 { 00:17:30.128 "subsystem": "iobuf", 00:17:30.128 "config": [ 00:17:30.128 { 00:17:30.128 "method": "iobuf_set_options", 00:17:30.128 "params": { 00:17:30.128 "large_bufsize": 135168, 00:17:30.128 "large_pool_count": 1024, 00:17:30.128 "small_bufsize": 8192, 00:17:30.128 "small_pool_count": 8192 00:17:30.128 } 00:17:30.128 } 00:17:30.128 ] 00:17:30.128 }, 00:17:30.128 { 00:17:30.128 "subsystem": "sock", 00:17:30.128 "config": [ 00:17:30.128 { 00:17:30.128 "method": "sock_impl_set_options", 00:17:30.128 "params": { 00:17:30.128 "enable_ktls": false, 00:17:30.128 "enable_placement_id": 0, 00:17:30.128 "enable_quickack": false, 00:17:30.128 "enable_recv_pipe": true, 00:17:30.128 "enable_zerocopy_send_client": false, 00:17:30.128 "enable_zerocopy_send_server": true, 00:17:30.128 "impl_name": "posix", 00:17:30.128 "recv_buf_size": 2097152, 00:17:30.128 "send_buf_size": 2097152, 00:17:30.128 "tls_version": 0, 00:17:30.128 "zerocopy_threshold": 0 00:17:30.128 } 00:17:30.128 }, 00:17:30.128 { 00:17:30.128 "method": "sock_impl_set_options", 00:17:30.128 "params": { 00:17:30.128 "enable_ktls": false, 00:17:30.128 "enable_placement_id": 0, 00:17:30.128 "enable_quickack": false, 00:17:30.128 "enable_recv_pipe": true, 00:17:30.128 "enable_zerocopy_send_client": false, 00:17:30.128 "enable_zerocopy_send_server": true, 00:17:30.128 "impl_name": "ssl", 00:17:30.128 "recv_buf_size": 4096, 00:17:30.128 "send_buf_size": 4096, 00:17:30.128 "tls_version": 0, 00:17:30.128 "zerocopy_threshold": 0 00:17:30.128 } 00:17:30.128 } 00:17:30.128 ] 00:17:30.128 }, 00:17:30.128 { 00:17:30.128 "subsystem": "vmd", 00:17:30.129 "config": [] 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "subsystem": "accel", 00:17:30.129 "config": [ 00:17:30.129 { 00:17:30.129 "method": "accel_set_options", 00:17:30.129 "params": { 00:17:30.129 "buf_count": 2048, 00:17:30.129 "large_cache_size": 16, 00:17:30.129 "sequence_count": 2048, 00:17:30.129 "small_cache_size": 128, 00:17:30.129 "task_count": 2048 00:17:30.129 } 00:17:30.129 } 00:17:30.129 ] 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "subsystem": "bdev", 00:17:30.129 "config": [ 00:17:30.129 { 00:17:30.129 "method": "bdev_set_options", 
00:17:30.129 "params": { 00:17:30.129 "bdev_auto_examine": true, 00:17:30.129 "bdev_io_cache_size": 256, 00:17:30.129 "bdev_io_pool_size": 65535, 00:17:30.129 "iobuf_large_cache_size": 16, 00:17:30.129 "iobuf_small_cache_size": 128 00:17:30.129 } 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "method": "bdev_raid_set_options", 00:17:30.129 "params": { 00:17:30.129 "process_window_size_kb": 1024 00:17:30.129 } 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "method": "bdev_iscsi_set_options", 00:17:30.129 "params": { 00:17:30.129 "timeout_sec": 30 00:17:30.129 } 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "method": "bdev_nvme_set_options", 00:17:30.129 "params": { 00:17:30.129 "action_on_timeout": "none", 00:17:30.129 "allow_accel_sequence": false, 00:17:30.129 "arbitration_burst": 0, 00:17:30.129 "bdev_retry_count": 3, 00:17:30.129 "ctrlr_loss_timeout_sec": 0, 00:17:30.129 "delay_cmd_submit": true, 00:17:30.129 "dhchap_dhgroups": [ 00:17:30.129 "null", 00:17:30.129 "ffdhe2048", 00:17:30.129 "ffdhe3072", 00:17:30.129 "ffdhe4096", 00:17:30.129 "ffdhe6144", 00:17:30.129 "ffdhe8192" 00:17:30.129 ], 00:17:30.129 "dhchap_digests": [ 00:17:30.129 "sha256", 00:17:30.129 "sha384", 00:17:30.129 "sha512" 00:17:30.129 ], 00:17:30.129 "disable_auto_failback": false, 00:17:30.129 "fast_io_fail_timeout_sec": 0, 00:17:30.129 "generate_uuids": false, 00:17:30.129 "high_priority_weight": 0, 00:17:30.129 "io_path_stat": false, 00:17:30.129 "io_queue_requests": 0, 00:17:30.129 "keep_alive_timeout_ms": 10000, 00:17:30.129 "low_priority_weight": 0, 00:17:30.129 "medium_priority_weight": 0, 00:17:30.129 "nvme_adminq_poll_period_us": 10000, 00:17:30.129 "nvme_error_stat": false, 00:17:30.129 "nvme_ioq_poll_period_us": 0, 00:17:30.129 "rdma_cm_event_timeout_ms": 0, 00:17:30.129 "rdma_max_cq_size": 0, 00:17:30.129 "rdma_srq_size": 0, 00:17:30.129 "reconnect_delay_sec": 0, 00:17:30.129 "timeout_admin_us": 0, 00:17:30.129 "timeout_us": 0, 00:17:30.129 "transport_ack_timeout": 0, 00:17:30.129 "transport_retry_count": 4, 00:17:30.129 "transport_tos": 0 00:17:30.129 } 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "method": "bdev_nvme_set_hotplug", 00:17:30.129 "params": { 00:17:30.129 "enable": false, 00:17:30.129 "period_us": 100000 00:17:30.129 } 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "method": "bdev_malloc_create", 00:17:30.129 "params": { 00:17:30.129 "block_size": 4096, 00:17:30.129 "name": "malloc0", 00:17:30.129 "num_blocks": 8192, 00:17:30.129 "optimal_io_boundary": 0, 00:17:30.129 "physical_block_size": 4096, 00:17:30.129 "uuid": "ddaf231c-5cbe-45e8-9bed-5332f186f8cd" 00:17:30.129 } 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "method": "bdev_wait_for_examine" 00:17:30.129 } 00:17:30.129 ] 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "subsystem": "nbd", 00:17:30.129 "config": [] 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "subsystem": "scheduler", 00:17:30.129 "config": [ 00:17:30.129 { 00:17:30.129 "method": "framework_set_scheduler", 00:17:30.129 "params": { 00:17:30.129 "name": "static" 00:17:30.129 } 00:17:30.129 } 00:17:30.129 ] 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "subsystem": "nvmf", 00:17:30.129 "config": [ 00:17:30.129 { 00:17:30.129 "method": "nvmf_set_config", 00:17:30.129 "params": { 00:17:30.129 "admin_cmd_passthru": { 00:17:30.129 "identify_ctrlr": false 00:17:30.129 }, 00:17:30.129 "discovery_filter": "match_any" 00:17:30.129 } 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "method": "nvmf_set_max_subsystems", 00:17:30.129 "params": { 00:17:30.129 "max_subsystems": 1024 00:17:30.129 } 00:17:30.129 
}, 00:17:30.129 { 00:17:30.129 "method": "nvmf_set_crdt", 00:17:30.129 "params": { 00:17:30.129 "crdt1": 0, 00:17:30.129 "crdt2": 0, 00:17:30.129 "crdt3": 0 00:17:30.129 } 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "method": "nvmf_create_transport", 00:17:30.129 "params": { 00:17:30.129 "abort_timeout_sec": 1, 00:17:30.129 "ack_timeout": 0, 00:17:30.129 "buf_cache_size": 4294967295, 00:17:30.129 "c2h_success": false, 00:17:30.129 "data_wr_pool_size": 0, 00:17:30.129 "dif_insert_or_strip": false, 00:17:30.129 "in_capsule_data_size": 4096, 00:17:30.129 "io_unit_size": 131072, 00:17:30.129 "max_aq_depth": 128, 00:17:30.129 "max_io_qpairs_per_ctrlr": 127, 00:17:30.129 "max_io_size": 131072, 00:17:30.129 "max_queue_depth": 128, 00:17:30.129 "num_shared_buffers": 511, 00:17:30.129 "sock_priority": 0, 00:17:30.129 "trtype": "TCP", 00:17:30.129 "zcopy": false 00:17:30.129 } 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "method": "nvmf_create_subsystem", 00:17:30.129 "params": { 00:17:30.129 "allow_any_host": false, 00:17:30.129 "ana_reporting": false, 00:17:30.129 "max_cntlid": 65519, 00:17:30.129 "max_namespaces": 10, 00:17:30.129 "min_cntlid": 1, 00:17:30.129 "model_number": "SPDK bdev Controller", 00:17:30.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.129 "serial_number": "SPDK00000000000001" 00:17:30.129 } 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "method": "nvmf_subsystem_add_host", 00:17:30.129 "params": { 00:17:30.129 "host": "nqn.2016-06.io.spdk:host1", 00:17:30.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.129 "psk": "/tmp/tmp.krxck21OLc" 00:17:30.129 } 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "method": "nvmf_subsystem_add_ns", 00:17:30.129 "params": { 00:17:30.129 "namespace": { 00:17:30.129 "bdev_name": "malloc0", 00:17:30.129 "nguid": "DDAF231C5CBE45E89BED5332F186F8CD", 00:17:30.129 "no_auto_visible": false, 00:17:30.129 "nsid": 1, 00:17:30.129 "uuid": "ddaf231c-5cbe-45e8-9bed-5332f186f8cd" 00:17:30.129 }, 00:17:30.129 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:30.129 } 00:17:30.129 }, 00:17:30.129 { 00:17:30.129 "method": "nvmf_subsystem_add_listener", 00:17:30.129 "params": { 00:17:30.129 "listen_address": { 00:17:30.129 "adrfam": "IPv4", 00:17:30.129 "traddr": "10.0.0.2", 00:17:30.129 "trsvcid": "4420", 00:17:30.129 "trtype": "TCP" 00:17:30.129 }, 00:17:30.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.129 "secure_channel": true 00:17:30.129 } 00:17:30.129 } 00:17:30.129 ] 00:17:30.129 } 00:17:30.129 ] 00:17:30.129 }' 00:17:30.129 18:30:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.129 18:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:30.129 18:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.129 18:30:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=83972 00:17:30.129 18:30:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:30.129 18:30:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 83972 00:17:30.129 18:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 83972 ']' 00:17:30.129 18:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.129 18:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:30.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
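For readability, here is a compressed sketch of the launch captured in the trace just above: nvmf_tgt is started inside the nvmf_tgt_ns_spdk network namespace with core mask 0x2 and all tracepoint groups enabled, and the JSON blob echoed above is handed to it as a config file via a /dev/fd path. The binary path, flags and netns name are verbatim from the trace; how fd 62 gets wired to the echoed JSON is an assumption here (plain bash process substitution stands in for the nvmf/common.sh helpers, which are not shown in this excerpt).

  # Sketch only, not the literal nvmfappstart helper:
  config='{ "subsystems": [ ... ] }'   # the target config echoed above, elided here
  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 -c <(echo "$config") &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # autotest_common.sh helper: polls the default /var/tmp/spdk.sock RPC socket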
00:17:30.130 18:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.130 18:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:30.130 18:30:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.387 [2024-05-13 18:30:46.118449] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:17:30.387 [2024-05-13 18:30:46.118553] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.387 [2024-05-13 18:30:46.259347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.645 [2024-05-13 18:30:46.373196] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.645 [2024-05-13 18:30:46.373255] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.645 [2024-05-13 18:30:46.373267] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.645 [2024-05-13 18:30:46.373276] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.645 [2024-05-13 18:30:46.373284] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.645 [2024-05-13 18:30:46.373375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.903 [2024-05-13 18:30:46.596632] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.903 [2024-05-13 18:30:46.612550] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:30.903 [2024-05-13 18:30:46.628522] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:30.903 [2024-05-13 18:30:46.628605] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:30.903 [2024-05-13 18:30:46.628820] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.470 18:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:31.470 18:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:31.470 18:30:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.470 18:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:31.470 18:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.470 18:30:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
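An aside that is not part of tls.sh: once the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice above appears, the subsystem, TLS listener and allowed-host entries that were fed in through /dev/fd/62 can be inspected over the target's default RPC socket. The query below is a stock SPDK RPC shown only as a hypothetical sanity check; the socket path is the one the waitforlisten call above polls.

  # Hypothetical check (not in the test): dump subsystems, listeners and allowed hosts
  # from the freshly started target.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems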
00:17:31.470 18:30:47 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=84017 00:17:31.470 18:30:47 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 84017 /var/tmp/bdevperf.sock 00:17:31.470 18:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 84017 ']' 00:17:31.470 18:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.470 18:30:47 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:31.470 18:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:31.470 18:30:47 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:17:31.470 "subsystems": [ 00:17:31.470 { 00:17:31.470 "subsystem": "keyring", 00:17:31.470 "config": [] 00:17:31.470 }, 00:17:31.470 { 00:17:31.470 "subsystem": "iobuf", 00:17:31.470 "config": [ 00:17:31.470 { 00:17:31.470 "method": "iobuf_set_options", 00:17:31.470 "params": { 00:17:31.470 "large_bufsize": 135168, 00:17:31.470 "large_pool_count": 1024, 00:17:31.470 "small_bufsize": 8192, 00:17:31.470 "small_pool_count": 8192 00:17:31.470 } 00:17:31.470 } 00:17:31.470 ] 00:17:31.470 }, 00:17:31.470 { 00:17:31.470 "subsystem": "sock", 00:17:31.470 "config": [ 00:17:31.470 { 00:17:31.470 "method": "sock_impl_set_options", 00:17:31.470 "params": { 00:17:31.470 "enable_ktls": false, 00:17:31.470 "enable_placement_id": 0, 00:17:31.470 "enable_quickack": false, 00:17:31.470 "enable_recv_pipe": true, 00:17:31.470 "enable_zerocopy_send_client": false, 00:17:31.470 "enable_zerocopy_send_server": true, 00:17:31.470 "impl_name": "posix", 00:17:31.470 "recv_buf_size": 2097152, 00:17:31.470 "send_buf_size": 2097152, 00:17:31.470 "tls_version": 0, 00:17:31.470 "zerocopy_threshold": 0 00:17:31.470 } 00:17:31.470 }, 00:17:31.470 { 00:17:31.470 "method": "sock_impl_set_options", 00:17:31.470 "params": { 00:17:31.470 "enable_ktls": false, 00:17:31.470 "enable_placement_id": 0, 00:17:31.470 "enable_quickack": false, 00:17:31.470 "enable_recv_pipe": true, 00:17:31.470 "enable_zerocopy_send_client": false, 00:17:31.470 "enable_zerocopy_send_server": true, 00:17:31.470 "impl_name": "ssl", 00:17:31.470 "recv_buf_size": 4096, 00:17:31.470 "send_buf_size": 4096, 00:17:31.470 "tls_version": 0, 00:17:31.470 "zerocopy_threshold": 0 00:17:31.470 } 00:17:31.470 } 00:17:31.470 ] 00:17:31.470 }, 00:17:31.470 { 00:17:31.470 "subsystem": "vmd", 00:17:31.470 "config": [] 00:17:31.470 }, 00:17:31.470 { 00:17:31.470 "subsystem": "accel", 00:17:31.470 "config": [ 00:17:31.470 { 00:17:31.470 "method": "accel_set_options", 00:17:31.470 "params": { 00:17:31.470 "buf_count": 2048, 00:17:31.470 "large_cache_size": 16, 00:17:31.471 "sequence_count": 2048, 00:17:31.471 "small_cache_size": 128, 00:17:31.471 "task_count": 2048 00:17:31.471 } 00:17:31.471 } 00:17:31.471 ] 00:17:31.471 }, 00:17:31.471 { 00:17:31.471 "subsystem": "bdev", 00:17:31.471 "config": [ 00:17:31.471 { 00:17:31.471 "method": "bdev_set_options", 00:17:31.471 "params": { 00:17:31.471 "bdev_auto_examine": true, 00:17:31.471 "bdev_io_cache_size": 256, 00:17:31.471 "bdev_io_pool_size": 65535, 00:17:31.471 "iobuf_large_cache_size": 16, 00:17:31.471 "iobuf_small_cache_size": 128 00:17:31.471 } 00:17:31.471 }, 00:17:31.471 { 00:17:31.471 "method": "bdev_raid_set_options", 00:17:31.471 "params": { 00:17:31.471 "process_window_size_kb": 1024 00:17:31.471 } 00:17:31.471 }, 00:17:31.471 { 00:17:31.471 
"method": "bdev_iscsi_set_options", 00:17:31.471 "params": { 00:17:31.471 "timeout_sec": 30 00:17:31.471 } 00:17:31.471 }, 00:17:31.471 { 00:17:31.471 "method": "bdev_nvme_set_options", 00:17:31.471 "params": { 00:17:31.471 "action_on_timeout": "none", 00:17:31.471 "allow_accel_sequence": false, 00:17:31.471 "arbitration_burst": 0, 00:17:31.471 "bdev_retry_count": 3, 00:17:31.471 "ctrlr_loss_timeout_sec": 0, 00:17:31.471 "delay_cmd_submit": true, 00:17:31.471 "dhchap_dhgroups": [ 00:17:31.471 "null", 00:17:31.471 "ffdhe2048", 00:17:31.471 "ffdhe3072", 00:17:31.471 "ffdhe4096", 00:17:31.471 "ffdhe6144", 00:17:31.471 "ffdhe8192" 00:17:31.471 ], 00:17:31.471 "dhchap_digests": [ 00:17:31.471 "sha256", 00:17:31.471 "sha384", 00:17:31.471 "sha512" 00:17:31.471 ], 00:17:31.471 "disable_auto_failback": false, 00:17:31.471 "fast_io_fail_timeout_sec": 0, 00:17:31.471 "generate_uuids": false, 00:17:31.471 "high_priority_weight": 0, 00:17:31.471 "io_path_stat": false, 00:17:31.471 "io_queue_requests": 512, 00:17:31.471 "keep_alive_timeout_ms": 10000, 00:17:31.471 "low_priority_weight": 0, 00:17:31.471 "medium_priority_weight": 0, 00:17:31.471 "nvme_adminq_poll_period_us": 10000, 00:17:31.471 "nvme_error_stat": false, 00:17:31.471 "nvme_ioq_poll_period_us": 0, 00:17:31.471 "rdma_cm_event_timeout_ms": 0, 00:17:31.471 "rdma_max_cq_size": 0, 00:17:31.471 "rdma_srq_size": 0, 00:17:31.471 "reconnect_delay_sec": 0, 00:17:31.471 "timeout_admin_us": 0, 00:17:31.471 "timeout_us": 0, 00:17:31.471 "transport_ack_timeout": 0, 00:17:31.471 "transport_retry_count": 4, 00:17:31.471 "transport_tos": 0 00:17:31.471 } 00:17:31.471 }, 00:17:31.471 { 00:17:31.471 "method": "bdev_nvme_attach_controller", 00:17:31.471 "params": { 00:17:31.471 "adrfam": "IPv4", 00:17:31.471 "ctrlr_loss_timeout_sec": 0, 00:17:31.471 "ddgst": false, 00:17:31.471 "fast_io_fail_timeout_sec": 0, 00:17:31.471 "hdgst": false, 00:17:31.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:31.471 "name": "TLSTEST", 00:17:31.471 "prchk_guard": false, 00:17:31.471 "prchk_reftag": false, 00:17:31.471 "psk": "/tmp/tmp.krxck21OLc", 00:17:31.471 "reconnect_delay_sec": 0, 00:17:31.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:31.471 "traddr": "10.0.0.2", 00:17:31.471 "trsvcid": "4420", 00:17:31.471 "trtype": "TCP" 00:17:31.471 } 00:17:31.471 }, 00:17:31.471 { 00:17:31.471 "method": "bdev_nvme_set_hotplug", 00:17:31.471 "params": { 00:17:31.471 "enable": false, 00:17:31.471 "period_us": 100000 00:17:31.471 } 00:17:31.471 }, 00:17:31.471 { 00:17:31.471 "method": "bdev_wait_for_examine" 00:17:31.471 } 00:17:31.471 ] 00:17:31.471 }, 00:17:31.471 { 00:17:31.471 "subsystem": "nbd", 00:17:31.471 "config": [] 00:17:31.471 } 00:17:31.471 ] 00:17:31.471 }' 00:17:31.471 18:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.471 18:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:31.471 18:30:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.471 [2024-05-13 18:30:47.227281] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:17:31.471 [2024-05-13 18:30:47.227382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84017 ] 00:17:31.471 [2024-05-13 18:30:47.366986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.729 [2024-05-13 18:30:47.496364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.729 [2024-05-13 18:30:47.657766] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.729 [2024-05-13 18:30:47.657888] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:32.664 18:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:32.664 18:30:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:32.664 18:30:48 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:32.664 Running I/O for 10 seconds... 00:17:42.713 00:17:42.713 Latency(us) 00:17:42.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.713 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:42.713 Verification LBA range: start 0x0 length 0x2000 00:17:42.713 TLSTESTn1 : 10.02 3988.44 15.58 0.00 0.00 32030.76 7060.01 35270.28 00:17:42.713 =================================================================================================================== 00:17:42.713 Total : 3988.44 15.58 0.00 0.00 32030.76 7060.01 35270.28 00:17:42.713 0 00:17:42.713 18:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:42.713 18:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 84017 00:17:42.713 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 84017 ']' 00:17:42.713 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 84017 00:17:42.713 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:42.713 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:42.713 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84017 00:17:42.713 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:17:42.713 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:17:42.713 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84017' 00:17:42.713 killing process with pid 84017 00:17:42.713 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 84017 00:17:42.713 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 84017 00:17:42.713 Received shutdown signal, test time was about 10.000000 seconds 00:17:42.713 00:17:42.713 Latency(us) 00:17:42.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.713 =================================================================================================================== 00:17:42.713 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:42.713 [2024-05-13 18:30:58.481213] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:17:42.973 18:30:58 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 83972 00:17:42.973 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 83972 ']' 00:17:42.973 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 83972 00:17:42.973 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:42.973 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:42.973 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83972 00:17:42.973 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:42.973 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:42.973 killing process with pid 83972 00:17:42.973 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83972' 00:17:42.973 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 83972 00:17:42.973 [2024-05-13 18:30:58.777130] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:42.973 [2024-05-13 18:30:58.777174] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:42.973 18:30:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 83972 00:17:43.234 18:30:59 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:17:43.234 18:30:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:43.234 18:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:43.234 18:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.234 18:30:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84164 00:17:43.234 18:30:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:43.234 18:30:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84164 00:17:43.234 18:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 84164 ']' 00:17:43.234 18:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.234 18:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:43.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.234 18:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.234 18:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:43.234 18:30:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.234 [2024-05-13 18:30:59.129083] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
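The repeated '[ -z ... ]' / kill -0 / uname / ps blocks in the trace (most recently for pids 84017 and 83972 above) all come from the killprocess helper in autotest_common.sh. Reconstructed roughly from the xtrace, and with the sudo branch that this run never takes left out, it behaves like the sketch below; the real helper may differ in detail.

  # Rough reconstruction of killprocess() as seen in the xtrace; sketch only.
  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                         # the '[' -z <pid> ']' guard
    kill -0 "$pid" || return 1                        # bail out if the process is already gone
    local process_name=unknown
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$pid") # e.g. reactor_1 / reactor_2 in this log
    fi
    # the trace compares $process_name against "sudo"; that branch is not exercised
    # in this run, so it is omitted from this sketch
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
  }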
00:17:43.234 [2024-05-13 18:30:59.129220] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.492 [2024-05-13 18:30:59.275865] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.492 [2024-05-13 18:30:59.395726] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.492 [2024-05-13 18:30:59.395808] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.492 [2024-05-13 18:30:59.395835] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.492 [2024-05-13 18:30:59.395843] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.492 [2024-05-13 18:30:59.395850] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.492 [2024-05-13 18:30:59.395878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.427 18:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:44.427 18:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:44.427 18:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:44.427 18:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.427 18:31:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.427 18:31:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.427 18:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.krxck21OLc 00:17:44.427 18:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.krxck21OLc 00:17:44.427 18:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:44.686 [2024-05-13 18:31:00.395922] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.686 18:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:44.944 18:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:45.202 [2024-05-13 18:31:00.916049] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:45.202 [2024-05-13 18:31:00.916167] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:45.202 [2024-05-13 18:31:00.916360] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.202 18:31:00 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:45.460 malloc0 00:17:45.460 18:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:45.718 18:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.krxck21OLc 00:17:45.976 [2024-05-13 18:31:01.735742] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:45.976 18:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=84271 00:17:45.976 18:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:45.976 18:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:45.976 18:31:01 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 84271 /var/tmp/bdevperf.sock 00:17:45.976 18:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 84271 ']' 00:17:45.976 18:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:45.976 18:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:45.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:45.976 18:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:45.976 18:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:45.976 18:31:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.976 [2024-05-13 18:31:01.822338] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:17:45.976 [2024-05-13 18:31:01.822486] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84271 ] 00:17:46.234 [2024-05-13 18:31:01.961030] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.234 [2024-05-13 18:31:02.080784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.168 18:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:47.168 18:31:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:47.168 18:31:02 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.krxck21OLc 00:17:47.168 18:31:03 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:47.427 [2024-05-13 18:31:03.319861] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:47.685 nvme0n1 00:17:47.685 18:31:03 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:47.685 Running I/O for 1 seconds... 
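Stripped of the xtrace noise, the RPC sequence interleaved through the previous two blocks is the core of this test case: the target (default socket /var/tmp/spdk.sock) gets a TLS-enabled TCP listener plus a PSK for host1, and the bdevperf initiator loads the same PSK into its keyring and attaches over it. Every command below is taken verbatim from the trace; only the RPC and PSK shell variables are editorial shorthand.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  PSK=/tmp/tmp.krxck21OLc    # PSK file created earlier in the run

  # target side
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k = secure (TLS) channel
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$PSK"   # emits the "PSK path" deprecation warning

  # initiator side (bdevperf, pid 84271, RPC socket /var/tmp/bdevperf.sock)
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$PSK"
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests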
00:17:48.639 00:17:48.639 Latency(us) 00:17:48.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.639 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:48.639 Verification LBA range: start 0x0 length 0x2000 00:17:48.639 nvme0n1 : 1.02 3760.86 14.69 0.00 0.00 33649.01 7238.75 21209.83 00:17:48.639 =================================================================================================================== 00:17:48.639 Total : 3760.86 14.69 0.00 0.00 33649.01 7238.75 21209.83 00:17:48.639 0 00:17:48.639 18:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 84271 00:17:48.640 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 84271 ']' 00:17:48.640 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 84271 00:17:48.640 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:48.640 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:48.640 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84271 00:17:48.898 killing process with pid 84271 00:17:48.898 Received shutdown signal, test time was about 1.000000 seconds 00:17:48.898 00:17:48.898 Latency(us) 00:17:48.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:48.898 =================================================================================================================== 00:17:48.898 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:48.898 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:48.898 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:48.898 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84271' 00:17:48.898 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 84271 00:17:48.898 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 84271 00:17:49.157 18:31:04 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 84164 00:17:49.157 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 84164 ']' 00:17:49.157 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 84164 00:17:49.157 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:49.157 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:49.157 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84164 00:17:49.157 killing process with pid 84164 00:17:49.157 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:49.157 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:49.157 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84164' 00:17:49.157 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 84164 00:17:49.157 [2024-05-13 18:31:04.873320] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:49.157 [2024-05-13 18:31:04.873368] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:49.157 18:31:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 
84164 00:17:49.417 18:31:05 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:17:49.417 18:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:49.417 18:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:49.417 18:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.417 18:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84345 00:17:49.417 18:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:49.417 18:31:05 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84345 00:17:49.417 18:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 84345 ']' 00:17:49.417 18:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.417 18:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:49.417 18:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.417 18:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:49.417 18:31:05 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:49.417 [2024-05-13 18:31:05.205732] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:17:49.417 [2024-05-13 18:31:05.205985] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.417 [2024-05-13 18:31:05.339627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.702 [2024-05-13 18:31:05.457778] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:49.702 [2024-05-13 18:31:05.458039] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:49.702 [2024-05-13 18:31:05.458184] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:49.702 [2024-05-13 18:31:05.458331] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:49.702 [2024-05-13 18:31:05.458367] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
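A note on the tracepoint banner repeated at every application start, including the one above: because the targets are launched with -e 0xFFFF, all tracepoint groups are recording, and the notices spell out how to get at the data. Following those notices literally (an aside, not something tls.sh does):

  # Decode the live trace buffer of the app with shm name "nvmf", instance id 0 ...
  spdk_trace -s nvmf -i 0
  # ... or keep the raw shared-memory file for offline analysis, as the notice suggests.
  cp /dev/shm/nvmf_trace.0 /tmp/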
00:17:49.702 [2024-05-13 18:31:05.458498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.269 18:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:50.269 18:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:50.269 18:31:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:50.269 18:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:50.269 18:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.528 18:31:06 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.528 18:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:17:50.528 18:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.528 18:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.528 [2024-05-13 18:31:06.254486] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.528 malloc0 00:17:50.528 [2024-05-13 18:31:06.286134] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:50.528 [2024-05-13 18:31:06.286216] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:50.528 [2024-05-13 18:31:06.286394] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:50.528 18:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.528 18:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=84391 00:17:50.528 18:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:50.528 18:31:06 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 84391 /var/tmp/bdevperf.sock 00:17:50.528 18:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 84391 ']' 00:17:50.528 18:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:50.528 18:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:50.528 18:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:50.528 18:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:50.528 18:31:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.528 [2024-05-13 18:31:06.377663] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
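As with the earlier instances, the bdevperf launched above (pid 84391) runs with -z, so it comes up idle and waits to be configured over /var/tmp/bdevperf.sock. What follows in the trace is the same keyring_file_add_key / bdev_nvme_attach_controller pair as before, a one-second verify pass, and then a save_config call on each side, which is what produces the tgtcfg and bperfcfg JSON dumps below. Condensed sketch (rpc_cmd in the trace is the harness wrapper around the same rpc.py invocation):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # ...keyring_file_add_key key0 + bdev_nvme_attach_controller --psk key0, as in the previous round...
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  $RPC save_config                               # target configuration -> "tgtcfg" dump below
  $RPC -s /var/tmp/bdevperf.sock save_config     # initiator configuration -> "bperfcfg" dump below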
00:17:50.529 [2024-05-13 18:31:06.377779] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84391 ] 00:17:50.787 [2024-05-13 18:31:06.517191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.787 [2024-05-13 18:31:06.647084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.723 18:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:51.723 18:31:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:51.723 18:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.krxck21OLc 00:17:51.723 18:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:51.981 [2024-05-13 18:31:07.867691] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:52.240 nvme0n1 00:17:52.240 18:31:07 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:52.240 Running I/O for 1 seconds... 00:17:53.175 00:17:53.175 Latency(us) 00:17:53.175 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.175 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:53.175 Verification LBA range: start 0x0 length 0x2000 00:17:53.175 nvme0n1 : 1.02 3776.39 14.75 0.00 0.00 33483.70 789.41 20494.89 00:17:53.175 =================================================================================================================== 00:17:53.175 Total : 3776.39 14.75 0.00 0.00 33483.70 789.41 20494.89 00:17:53.175 0 00:17:53.175 18:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:17:53.175 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.175 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.434 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.434 18:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:17:53.434 "subsystems": [ 00:17:53.434 { 00:17:53.434 "subsystem": "keyring", 00:17:53.434 "config": [ 00:17:53.434 { 00:17:53.434 "method": "keyring_file_add_key", 00:17:53.434 "params": { 00:17:53.434 "name": "key0", 00:17:53.434 "path": "/tmp/tmp.krxck21OLc" 00:17:53.434 } 00:17:53.434 } 00:17:53.434 ] 00:17:53.434 }, 00:17:53.434 { 00:17:53.434 "subsystem": "iobuf", 00:17:53.434 "config": [ 00:17:53.434 { 00:17:53.434 "method": "iobuf_set_options", 00:17:53.434 "params": { 00:17:53.434 "large_bufsize": 135168, 00:17:53.434 "large_pool_count": 1024, 00:17:53.434 "small_bufsize": 8192, 00:17:53.434 "small_pool_count": 8192 00:17:53.434 } 00:17:53.434 } 00:17:53.434 ] 00:17:53.434 }, 00:17:53.434 { 00:17:53.434 "subsystem": "sock", 00:17:53.434 "config": [ 00:17:53.434 { 00:17:53.434 "method": "sock_impl_set_options", 00:17:53.434 "params": { 00:17:53.434 "enable_ktls": false, 00:17:53.434 "enable_placement_id": 0, 00:17:53.434 "enable_quickack": false, 00:17:53.434 "enable_recv_pipe": true, 00:17:53.434 
"enable_zerocopy_send_client": false, 00:17:53.434 "enable_zerocopy_send_server": true, 00:17:53.434 "impl_name": "posix", 00:17:53.434 "recv_buf_size": 2097152, 00:17:53.434 "send_buf_size": 2097152, 00:17:53.434 "tls_version": 0, 00:17:53.434 "zerocopy_threshold": 0 00:17:53.434 } 00:17:53.434 }, 00:17:53.434 { 00:17:53.434 "method": "sock_impl_set_options", 00:17:53.434 "params": { 00:17:53.434 "enable_ktls": false, 00:17:53.434 "enable_placement_id": 0, 00:17:53.434 "enable_quickack": false, 00:17:53.434 "enable_recv_pipe": true, 00:17:53.434 "enable_zerocopy_send_client": false, 00:17:53.434 "enable_zerocopy_send_server": true, 00:17:53.434 "impl_name": "ssl", 00:17:53.434 "recv_buf_size": 4096, 00:17:53.434 "send_buf_size": 4096, 00:17:53.434 "tls_version": 0, 00:17:53.434 "zerocopy_threshold": 0 00:17:53.434 } 00:17:53.434 } 00:17:53.434 ] 00:17:53.434 }, 00:17:53.434 { 00:17:53.434 "subsystem": "vmd", 00:17:53.434 "config": [] 00:17:53.434 }, 00:17:53.434 { 00:17:53.434 "subsystem": "accel", 00:17:53.434 "config": [ 00:17:53.434 { 00:17:53.434 "method": "accel_set_options", 00:17:53.434 "params": { 00:17:53.434 "buf_count": 2048, 00:17:53.434 "large_cache_size": 16, 00:17:53.434 "sequence_count": 2048, 00:17:53.434 "small_cache_size": 128, 00:17:53.434 "task_count": 2048 00:17:53.434 } 00:17:53.434 } 00:17:53.434 ] 00:17:53.434 }, 00:17:53.434 { 00:17:53.434 "subsystem": "bdev", 00:17:53.434 "config": [ 00:17:53.434 { 00:17:53.434 "method": "bdev_set_options", 00:17:53.434 "params": { 00:17:53.434 "bdev_auto_examine": true, 00:17:53.434 "bdev_io_cache_size": 256, 00:17:53.434 "bdev_io_pool_size": 65535, 00:17:53.434 "iobuf_large_cache_size": 16, 00:17:53.434 "iobuf_small_cache_size": 128 00:17:53.434 } 00:17:53.434 }, 00:17:53.434 { 00:17:53.434 "method": "bdev_raid_set_options", 00:17:53.434 "params": { 00:17:53.434 "process_window_size_kb": 1024 00:17:53.434 } 00:17:53.434 }, 00:17:53.434 { 00:17:53.434 "method": "bdev_iscsi_set_options", 00:17:53.434 "params": { 00:17:53.434 "timeout_sec": 30 00:17:53.434 } 00:17:53.434 }, 00:17:53.434 { 00:17:53.434 "method": "bdev_nvme_set_options", 00:17:53.434 "params": { 00:17:53.434 "action_on_timeout": "none", 00:17:53.434 "allow_accel_sequence": false, 00:17:53.434 "arbitration_burst": 0, 00:17:53.434 "bdev_retry_count": 3, 00:17:53.435 "ctrlr_loss_timeout_sec": 0, 00:17:53.435 "delay_cmd_submit": true, 00:17:53.435 "dhchap_dhgroups": [ 00:17:53.435 "null", 00:17:53.435 "ffdhe2048", 00:17:53.435 "ffdhe3072", 00:17:53.435 "ffdhe4096", 00:17:53.435 "ffdhe6144", 00:17:53.435 "ffdhe8192" 00:17:53.435 ], 00:17:53.435 "dhchap_digests": [ 00:17:53.435 "sha256", 00:17:53.435 "sha384", 00:17:53.435 "sha512" 00:17:53.435 ], 00:17:53.435 "disable_auto_failback": false, 00:17:53.435 "fast_io_fail_timeout_sec": 0, 00:17:53.435 "generate_uuids": false, 00:17:53.435 "high_priority_weight": 0, 00:17:53.435 "io_path_stat": false, 00:17:53.435 "io_queue_requests": 0, 00:17:53.435 "keep_alive_timeout_ms": 10000, 00:17:53.435 "low_priority_weight": 0, 00:17:53.435 "medium_priority_weight": 0, 00:17:53.435 "nvme_adminq_poll_period_us": 10000, 00:17:53.435 "nvme_error_stat": false, 00:17:53.435 "nvme_ioq_poll_period_us": 0, 00:17:53.435 "rdma_cm_event_timeout_ms": 0, 00:17:53.435 "rdma_max_cq_size": 0, 00:17:53.435 "rdma_srq_size": 0, 00:17:53.435 "reconnect_delay_sec": 0, 00:17:53.435 "timeout_admin_us": 0, 00:17:53.435 "timeout_us": 0, 00:17:53.435 "transport_ack_timeout": 0, 00:17:53.435 "transport_retry_count": 4, 00:17:53.435 "transport_tos": 0 
00:17:53.435 } 00:17:53.435 }, 00:17:53.435 { 00:17:53.435 "method": "bdev_nvme_set_hotplug", 00:17:53.435 "params": { 00:17:53.435 "enable": false, 00:17:53.435 "period_us": 100000 00:17:53.435 } 00:17:53.435 }, 00:17:53.435 { 00:17:53.435 "method": "bdev_malloc_create", 00:17:53.435 "params": { 00:17:53.435 "block_size": 4096, 00:17:53.435 "name": "malloc0", 00:17:53.435 "num_blocks": 8192, 00:17:53.435 "optimal_io_boundary": 0, 00:17:53.435 "physical_block_size": 4096, 00:17:53.435 "uuid": "44f72dc3-f739-4014-97be-f22fe6c1a4d6" 00:17:53.435 } 00:17:53.435 }, 00:17:53.435 { 00:17:53.435 "method": "bdev_wait_for_examine" 00:17:53.435 } 00:17:53.435 ] 00:17:53.435 }, 00:17:53.435 { 00:17:53.435 "subsystem": "nbd", 00:17:53.435 "config": [] 00:17:53.435 }, 00:17:53.435 { 00:17:53.435 "subsystem": "scheduler", 00:17:53.435 "config": [ 00:17:53.435 { 00:17:53.435 "method": "framework_set_scheduler", 00:17:53.435 "params": { 00:17:53.435 "name": "static" 00:17:53.435 } 00:17:53.435 } 00:17:53.435 ] 00:17:53.435 }, 00:17:53.435 { 00:17:53.435 "subsystem": "nvmf", 00:17:53.435 "config": [ 00:17:53.435 { 00:17:53.435 "method": "nvmf_set_config", 00:17:53.435 "params": { 00:17:53.435 "admin_cmd_passthru": { 00:17:53.435 "identify_ctrlr": false 00:17:53.435 }, 00:17:53.435 "discovery_filter": "match_any" 00:17:53.435 } 00:17:53.435 }, 00:17:53.435 { 00:17:53.435 "method": "nvmf_set_max_subsystems", 00:17:53.435 "params": { 00:17:53.435 "max_subsystems": 1024 00:17:53.435 } 00:17:53.435 }, 00:17:53.435 { 00:17:53.435 "method": "nvmf_set_crdt", 00:17:53.435 "params": { 00:17:53.435 "crdt1": 0, 00:17:53.435 "crdt2": 0, 00:17:53.435 "crdt3": 0 00:17:53.435 } 00:17:53.435 }, 00:17:53.435 { 00:17:53.435 "method": "nvmf_create_transport", 00:17:53.435 "params": { 00:17:53.435 "abort_timeout_sec": 1, 00:17:53.435 "ack_timeout": 0, 00:17:53.435 "buf_cache_size": 4294967295, 00:17:53.435 "c2h_success": false, 00:17:53.435 "data_wr_pool_size": 0, 00:17:53.435 "dif_insert_or_strip": false, 00:17:53.435 "in_capsule_data_size": 4096, 00:17:53.435 "io_unit_size": 131072, 00:17:53.435 "max_aq_depth": 128, 00:17:53.435 "max_io_qpairs_per_ctrlr": 127, 00:17:53.435 "max_io_size": 131072, 00:17:53.435 "max_queue_depth": 128, 00:17:53.435 "num_shared_buffers": 511, 00:17:53.435 "sock_priority": 0, 00:17:53.435 "trtype": "TCP", 00:17:53.435 "zcopy": false 00:17:53.435 } 00:17:53.435 }, 00:17:53.435 { 00:17:53.435 "method": "nvmf_create_subsystem", 00:17:53.435 "params": { 00:17:53.435 "allow_any_host": false, 00:17:53.435 "ana_reporting": false, 00:17:53.435 "max_cntlid": 65519, 00:17:53.435 "max_namespaces": 32, 00:17:53.435 "min_cntlid": 1, 00:17:53.435 "model_number": "SPDK bdev Controller", 00:17:53.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.435 "serial_number": "00000000000000000000" 00:17:53.435 } 00:17:53.435 }, 00:17:53.435 { 00:17:53.435 "method": "nvmf_subsystem_add_host", 00:17:53.435 "params": { 00:17:53.435 "host": "nqn.2016-06.io.spdk:host1", 00:17:53.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.435 "psk": "key0" 00:17:53.435 } 00:17:53.435 }, 00:17:53.435 { 00:17:53.435 "method": "nvmf_subsystem_add_ns", 00:17:53.435 "params": { 00:17:53.435 "namespace": { 00:17:53.435 "bdev_name": "malloc0", 00:17:53.435 "nguid": "44F72DC3F739401497BEF22FE6C1A4D6", 00:17:53.435 "no_auto_visible": false, 00:17:53.435 "nsid": 1, 00:17:53.435 "uuid": "44f72dc3-f739-4014-97be-f22fe6c1a4d6" 00:17:53.435 }, 00:17:53.435 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:53.435 } 00:17:53.435 }, 00:17:53.435 { 00:17:53.435 
"method": "nvmf_subsystem_add_listener", 00:17:53.435 "params": { 00:17:53.435 "listen_address": { 00:17:53.435 "adrfam": "IPv4", 00:17:53.435 "traddr": "10.0.0.2", 00:17:53.435 "trsvcid": "4420", 00:17:53.435 "trtype": "TCP" 00:17:53.435 }, 00:17:53.435 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.435 "secure_channel": true 00:17:53.435 } 00:17:53.435 } 00:17:53.435 ] 00:17:53.435 } 00:17:53.435 ] 00:17:53.435 }' 00:17:53.435 18:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:53.695 18:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:17:53.695 "subsystems": [ 00:17:53.695 { 00:17:53.695 "subsystem": "keyring", 00:17:53.695 "config": [ 00:17:53.695 { 00:17:53.695 "method": "keyring_file_add_key", 00:17:53.696 "params": { 00:17:53.696 "name": "key0", 00:17:53.696 "path": "/tmp/tmp.krxck21OLc" 00:17:53.696 } 00:17:53.696 } 00:17:53.696 ] 00:17:53.696 }, 00:17:53.696 { 00:17:53.696 "subsystem": "iobuf", 00:17:53.696 "config": [ 00:17:53.696 { 00:17:53.696 "method": "iobuf_set_options", 00:17:53.696 "params": { 00:17:53.696 "large_bufsize": 135168, 00:17:53.696 "large_pool_count": 1024, 00:17:53.696 "small_bufsize": 8192, 00:17:53.696 "small_pool_count": 8192 00:17:53.696 } 00:17:53.696 } 00:17:53.696 ] 00:17:53.696 }, 00:17:53.696 { 00:17:53.696 "subsystem": "sock", 00:17:53.696 "config": [ 00:17:53.696 { 00:17:53.696 "method": "sock_impl_set_options", 00:17:53.696 "params": { 00:17:53.696 "enable_ktls": false, 00:17:53.696 "enable_placement_id": 0, 00:17:53.696 "enable_quickack": false, 00:17:53.696 "enable_recv_pipe": true, 00:17:53.696 "enable_zerocopy_send_client": false, 00:17:53.696 "enable_zerocopy_send_server": true, 00:17:53.696 "impl_name": "posix", 00:17:53.696 "recv_buf_size": 2097152, 00:17:53.696 "send_buf_size": 2097152, 00:17:53.696 "tls_version": 0, 00:17:53.696 "zerocopy_threshold": 0 00:17:53.696 } 00:17:53.696 }, 00:17:53.696 { 00:17:53.696 "method": "sock_impl_set_options", 00:17:53.696 "params": { 00:17:53.696 "enable_ktls": false, 00:17:53.696 "enable_placement_id": 0, 00:17:53.696 "enable_quickack": false, 00:17:53.696 "enable_recv_pipe": true, 00:17:53.696 "enable_zerocopy_send_client": false, 00:17:53.696 "enable_zerocopy_send_server": true, 00:17:53.696 "impl_name": "ssl", 00:17:53.696 "recv_buf_size": 4096, 00:17:53.696 "send_buf_size": 4096, 00:17:53.696 "tls_version": 0, 00:17:53.696 "zerocopy_threshold": 0 00:17:53.696 } 00:17:53.696 } 00:17:53.696 ] 00:17:53.696 }, 00:17:53.696 { 00:17:53.696 "subsystem": "vmd", 00:17:53.696 "config": [] 00:17:53.696 }, 00:17:53.696 { 00:17:53.696 "subsystem": "accel", 00:17:53.696 "config": [ 00:17:53.696 { 00:17:53.696 "method": "accel_set_options", 00:17:53.696 "params": { 00:17:53.696 "buf_count": 2048, 00:17:53.696 "large_cache_size": 16, 00:17:53.696 "sequence_count": 2048, 00:17:53.696 "small_cache_size": 128, 00:17:53.696 "task_count": 2048 00:17:53.696 } 00:17:53.696 } 00:17:53.696 ] 00:17:53.696 }, 00:17:53.696 { 00:17:53.696 "subsystem": "bdev", 00:17:53.696 "config": [ 00:17:53.696 { 00:17:53.696 "method": "bdev_set_options", 00:17:53.696 "params": { 00:17:53.696 "bdev_auto_examine": true, 00:17:53.696 "bdev_io_cache_size": 256, 00:17:53.696 "bdev_io_pool_size": 65535, 00:17:53.696 "iobuf_large_cache_size": 16, 00:17:53.696 "iobuf_small_cache_size": 128 00:17:53.696 } 00:17:53.696 }, 00:17:53.696 { 00:17:53.696 "method": "bdev_raid_set_options", 00:17:53.696 "params": { 00:17:53.696 "process_window_size_kb": 
1024 00:17:53.696 } 00:17:53.696 }, 00:17:53.696 { 00:17:53.696 "method": "bdev_iscsi_set_options", 00:17:53.696 "params": { 00:17:53.696 "timeout_sec": 30 00:17:53.696 } 00:17:53.696 }, 00:17:53.696 { 00:17:53.696 "method": "bdev_nvme_set_options", 00:17:53.696 "params": { 00:17:53.696 "action_on_timeout": "none", 00:17:53.696 "allow_accel_sequence": false, 00:17:53.696 "arbitration_burst": 0, 00:17:53.696 "bdev_retry_count": 3, 00:17:53.696 "ctrlr_loss_timeout_sec": 0, 00:17:53.696 "delay_cmd_submit": true, 00:17:53.696 "dhchap_dhgroups": [ 00:17:53.696 "null", 00:17:53.696 "ffdhe2048", 00:17:53.696 "ffdhe3072", 00:17:53.696 "ffdhe4096", 00:17:53.696 "ffdhe6144", 00:17:53.696 "ffdhe8192" 00:17:53.696 ], 00:17:53.696 "dhchap_digests": [ 00:17:53.696 "sha256", 00:17:53.696 "sha384", 00:17:53.696 "sha512" 00:17:53.696 ], 00:17:53.696 "disable_auto_failback": false, 00:17:53.696 "fast_io_fail_timeout_sec": 0, 00:17:53.696 "generate_uuids": false, 00:17:53.696 "high_priority_weight": 0, 00:17:53.696 "io_path_stat": false, 00:17:53.696 "io_queue_requests": 512, 00:17:53.696 "keep_alive_timeout_ms": 10000, 00:17:53.696 "low_priority_weight": 0, 00:17:53.696 "medium_priority_weight": 0, 00:17:53.696 "nvme_adminq_poll_period_us": 10000, 00:17:53.696 "nvme_error_stat": false, 00:17:53.696 "nvme_ioq_poll_period_us": 0, 00:17:53.696 "rdma_cm_event_timeout_ms": 0, 00:17:53.696 "rdma_max_cq_size": 0, 00:17:53.696 "rdma_srq_size": 0, 00:17:53.696 "reconnect_delay_sec": 0, 00:17:53.696 "timeout_admin_us": 0, 00:17:53.696 "timeout_us": 0, 00:17:53.696 "transport_ack_timeout": 0, 00:17:53.696 "transport_retry_count": 4, 00:17:53.696 "transport_tos": 0 00:17:53.696 } 00:17:53.696 }, 00:17:53.696 { 00:17:53.696 "method": "bdev_nvme_attach_controller", 00:17:53.696 "params": { 00:17:53.696 "adrfam": "IPv4", 00:17:53.696 "ctrlr_loss_timeout_sec": 0, 00:17:53.696 "ddgst": false, 00:17:53.696 "fast_io_fail_timeout_sec": 0, 00:17:53.696 "hdgst": false, 00:17:53.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:53.696 "name": "nvme0", 00:17:53.696 "prchk_guard": false, 00:17:53.696 "prchk_reftag": false, 00:17:53.696 "psk": "key0", 00:17:53.696 "reconnect_delay_sec": 0, 00:17:53.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:53.696 "traddr": "10.0.0.2", 00:17:53.696 "trsvcid": "4420", 00:17:53.696 "trtype": "TCP" 00:17:53.696 } 00:17:53.696 }, 00:17:53.696 { 00:17:53.696 "method": "bdev_nvme_set_hotplug", 00:17:53.696 "params": { 00:17:53.696 "enable": false, 00:17:53.696 "period_us": 100000 00:17:53.696 } 00:17:53.696 }, 00:17:53.696 { 00:17:53.696 "method": "bdev_enable_histogram", 00:17:53.696 "params": { 00:17:53.696 "enable": true, 00:17:53.696 "name": "nvme0n1" 00:17:53.696 } 00:17:53.696 }, 00:17:53.696 { 00:17:53.696 "method": "bdev_wait_for_examine" 00:17:53.696 } 00:17:53.696 ] 00:17:53.696 }, 00:17:53.696 { 00:17:53.696 "subsystem": "nbd", 00:17:53.696 "config": [] 00:17:53.696 } 00:17:53.696 ] 00:17:53.696 }' 00:17:53.696 18:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 84391 00:17:53.696 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 84391 ']' 00:17:53.696 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 84391 00:17:53.696 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:53.696 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:53.696 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84391 00:17:53.696 18:31:09 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:53.696 killing process with pid 84391 00:17:53.696 Received shutdown signal, test time was about 1.000000 seconds 00:17:53.696 00:17:53.696 Latency(us) 00:17:53.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.696 =================================================================================================================== 00:17:53.696 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.696 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:53.696 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84391' 00:17:53.696 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 84391 00:17:53.696 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 84391 00:17:53.955 18:31:09 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 84345 00:17:53.955 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 84345 ']' 00:17:53.955 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 84345 00:17:53.955 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:53.956 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:53.956 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84345 00:17:53.956 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:53.956 killing process with pid 84345 00:17:53.956 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:53.956 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84345' 00:17:53.956 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 84345 00:17:53.956 [2024-05-13 18:31:09.872005] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:53.956 18:31:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 84345 00:17:54.215 18:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:17:54.215 18:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:54.215 18:31:10 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:17:54.215 "subsystems": [ 00:17:54.215 { 00:17:54.215 "subsystem": "keyring", 00:17:54.215 "config": [ 00:17:54.215 { 00:17:54.215 "method": "keyring_file_add_key", 00:17:54.215 "params": { 00:17:54.215 "name": "key0", 00:17:54.215 "path": "/tmp/tmp.krxck21OLc" 00:17:54.215 } 00:17:54.215 } 00:17:54.215 ] 00:17:54.215 }, 00:17:54.215 { 00:17:54.215 "subsystem": "iobuf", 00:17:54.215 "config": [ 00:17:54.215 { 00:17:54.215 "method": "iobuf_set_options", 00:17:54.215 "params": { 00:17:54.215 "large_bufsize": 135168, 00:17:54.215 "large_pool_count": 1024, 00:17:54.215 "small_bufsize": 8192, 00:17:54.215 "small_pool_count": 8192 00:17:54.215 } 00:17:54.215 } 00:17:54.215 ] 00:17:54.215 }, 00:17:54.215 { 00:17:54.215 "subsystem": "sock", 00:17:54.215 "config": [ 00:17:54.215 { 00:17:54.215 "method": "sock_impl_set_options", 00:17:54.215 "params": { 00:17:54.215 "enable_ktls": false, 00:17:54.215 "enable_placement_id": 0, 00:17:54.215 "enable_quickack": false, 00:17:54.215 "enable_recv_pipe": true, 00:17:54.215 
"enable_zerocopy_send_client": false, 00:17:54.215 "enable_zerocopy_send_server": true, 00:17:54.215 "impl_name": "posix", 00:17:54.215 "recv_buf_size": 2097152, 00:17:54.215 "send_buf_size": 2097152, 00:17:54.215 "tls_version": 0, 00:17:54.215 "zerocopy_threshold": 0 00:17:54.215 } 00:17:54.215 }, 00:17:54.215 { 00:17:54.215 "method": "sock_impl_set_options", 00:17:54.215 "params": { 00:17:54.215 "enable_ktls": false, 00:17:54.215 "enable_placement_id": 0, 00:17:54.215 "enable_quickack": false, 00:17:54.215 "enable_recv_pipe": true, 00:17:54.215 "enable_zerocopy_send_client": false, 00:17:54.215 "enable_zerocopy_send_server": true, 00:17:54.215 "impl_name": "ssl", 00:17:54.215 "recv_buf_size": 4096, 00:17:54.215 "send_buf_size": 4096, 00:17:54.215 "tls_version": 0, 00:17:54.215 "zerocopy_threshold": 0 00:17:54.215 } 00:17:54.215 } 00:17:54.215 ] 00:17:54.215 }, 00:17:54.215 { 00:17:54.215 "subsystem": "vmd", 00:17:54.215 "config": [] 00:17:54.215 }, 00:17:54.215 { 00:17:54.215 "subsystem": "accel", 00:17:54.215 "config": [ 00:17:54.215 { 00:17:54.215 "method": "accel_set_options", 00:17:54.215 "params": { 00:17:54.215 "buf_count": 2048, 00:17:54.215 "large_cache_size": 16, 00:17:54.215 "sequence_count": 2048, 00:17:54.215 "small_cache_size": 128, 00:17:54.215 "task_count": 2048 00:17:54.215 } 00:17:54.215 } 00:17:54.215 ] 00:17:54.215 }, 00:17:54.215 { 00:17:54.215 "subsystem": "bdev", 00:17:54.215 "config": [ 00:17:54.215 { 00:17:54.215 "method": "bdev_set_options", 00:17:54.215 "params": { 00:17:54.215 "bdev_auto_examine": true, 00:17:54.215 "bdev_io_cache_size": 256, 00:17:54.215 "bdev_io_pool_size": 65535, 00:17:54.215 "iobuf_large_cache_size": 16, 00:17:54.215 "iobuf_small_cache_size": 128 00:17:54.215 } 00:17:54.215 }, 00:17:54.215 { 00:17:54.215 "method": "bdev_raid_set_options", 00:17:54.215 "params": { 00:17:54.215 "process_window_size_kb": 1024 00:17:54.215 } 00:17:54.215 }, 00:17:54.215 { 00:17:54.215 "method": "bdev_iscsi_set_options", 00:17:54.215 "params": { 00:17:54.215 "timeout_sec": 30 00:17:54.215 } 00:17:54.215 }, 00:17:54.215 { 00:17:54.215 "method": "bdev_nvme_set_options", 00:17:54.215 "params": { 00:17:54.215 "action_on_timeout": "none", 00:17:54.215 "allow_accel_sequence": false, 00:17:54.215 "arbitration_burst": 0, 00:17:54.215 "bdev_retry_count": 3, 00:17:54.215 "ctrlr_loss_timeout_sec": 0, 00:17:54.215 "delay_cmd_submit": true, 00:17:54.215 "dhchap_dhgroups": [ 00:17:54.215 "null", 00:17:54.215 "ffdhe2048", 00:17:54.215 "ffdhe3072", 00:17:54.215 "ffdhe4096", 00:17:54.215 "ffdhe6144", 00:17:54.215 "ffdhe8192" 00:17:54.215 ], 00:17:54.215 "dhchap_digests": [ 00:17:54.215 "sha256", 00:17:54.215 "sha384", 00:17:54.215 "sha512" 00:17:54.215 ], 00:17:54.215 "disable_auto_failback": false, 00:17:54.215 "fast_io_fail_timeout_sec": 0, 00:17:54.215 "generate_uuids": false, 00:17:54.215 "high_priority_weight": 0, 00:17:54.215 "io_path_stat": false, 00:17:54.215 "io_queue_requests": 0, 00:17:54.215 "keep_alive_timeout_ms": 10000, 00:17:54.215 "low_priority_weight": 0, 00:17:54.215 "medium_priority_weight": 0, 00:17:54.215 "nvme_adminq_poll_period_us": 10000, 00:17:54.215 "nvme_error_stat": false, 00:17:54.215 "nvme_ioq_poll_period_us": 0, 00:17:54.215 "rdma_cm_event_timeout_ms": 0, 00:17:54.215 "rdma_max_cq_size": 0, 00:17:54.215 "rdma_srq_size": 0, 00:17:54.215 "reconnect_delay_sec": 0, 00:17:54.215 "timeout_admin_us": 0, 00:17:54.215 "timeout_us": 0, 00:17:54.215 "transport_ack_timeout": 0, 00:17:54.215 "transport_retry_count": 4, 00:17:54.215 "transport_tos": 0 
00:17:54.215 } 00:17:54.215 }, 00:17:54.215 { 00:17:54.215 "method": "bdev_nvme_set_hotplug", 00:17:54.215 "params": { 00:17:54.215 "enable": false, 00:17:54.215 "period_us": 100000 00:17:54.215 } 00:17:54.215 }, 00:17:54.215 { 00:17:54.215 "method": "bdev_malloc_create", 00:17:54.215 "params": { 00:17:54.215 "block_size": 4096, 00:17:54.215 "name": "malloc0", 00:17:54.215 "num_blocks": 8192, 00:17:54.215 "optimal_io_boundary": 0, 00:17:54.215 "physical_block_size": 4096, 00:17:54.215 "uuid": "44f72dc3-f739-4014-97be-f22fe6c1a4d6" 00:17:54.215 } 00:17:54.215 }, 00:17:54.215 { 00:17:54.215 "method": "bdev_wait_for_examine" 00:17:54.215 } 00:17:54.215 ] 00:17:54.215 }, 00:17:54.215 { 00:17:54.215 "subsystem": "nbd", 00:17:54.216 "config": [] 00:17:54.216 }, 00:17:54.216 { 00:17:54.216 "subsystem": "scheduler", 00:17:54.216 "config": [ 00:17:54.216 { 00:17:54.216 "method": "framework_set_scheduler", 00:17:54.216 "params": { 00:17:54.216 "name": "static" 00:17:54.216 } 00:17:54.216 } 00:17:54.216 ] 00:17:54.216 }, 00:17:54.216 { 00:17:54.216 "subsystem": "nvmf", 00:17:54.216 "config": [ 00:17:54.216 { 00:17:54.216 "method": "nvmf_set_config", 00:17:54.216 "params": { 00:17:54.216 "admin_cmd_passthru": { 00:17:54.216 "identify_ctrlr": false 00:17:54.216 }, 00:17:54.216 "discovery_filter": "match_any" 00:17:54.216 } 00:17:54.216 }, 00:17:54.216 { 00:17:54.216 "method": "nvmf_set_max_subsystems", 00:17:54.216 "params": { 00:17:54.216 "max_subsystems": 1024 00:17:54.216 } 00:17:54.216 }, 00:17:54.216 { 00:17:54.216 "method": "nvmf_set_crdt", 00:17:54.216 "params": { 00:17:54.216 "crdt1": 0, 00:17:54.216 "crdt2": 0, 00:17:54.216 "crdt3": 0 00:17:54.216 } 00:17:54.216 }, 00:17:54.216 { 00:17:54.216 "method": "nvmf_create_transport", 00:17:54.216 "params": { 00:17:54.216 "abort_timeout_sec": 1, 00:17:54.216 "ack_timeout": 0, 00:17:54.216 "buf_cache_size": 4294967295, 00:17:54.216 "c2h_success": false, 00:17:54.216 "data_wr_pool_size": 0, 00:17:54.216 "dif_insert_or_strip": false, 00:17:54.216 "in_capsule_data_size": 4096, 00:17:54.216 "io_unit_size": 131072, 00:17:54.216 "max_aq_depth": 128, 00:17:54.216 "max_io_qpairs_per_ctrlr": 127, 00:17:54.216 "max_io_size": 131072, 00:17:54.216 "max_queue_depth": 128, 00:17:54.216 "num_shared_buffers": 511, 00:17:54.216 "sock_priority": 0, 00:17:54.216 "trtype": "TCP", 00:17:54.216 "zcopy": false 00:17:54.216 } 00:17:54.216 }, 00:17:54.216 { 00:17:54.216 "method": "nvmf_create_subsystem", 00:17:54.216 "params": { 00:17:54.216 "allow_any_host": false, 00:17:54.216 "ana_reporting": false, 00:17:54.216 "max_cntlid": 65519, 00:17:54.216 "max_namespaces": 32, 00:17:54.216 "min_cntlid": 1, 00:17:54.216 "model_number": "SPDK bdev Controller", 00:17:54.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.216 "serial_number": "00000000000000000000" 00:17:54.216 } 00:17:54.216 }, 00:17:54.216 { 00:17:54.216 "method": "nvmf_subsystem_add_host", 00:17:54.216 "params": { 00:17:54.216 "host": "nqn.2016-06.io.spdk:host1", 00:17:54.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.216 "psk": "key0" 00:17:54.216 } 00:17:54.216 }, 00:17:54.216 { 00:17:54.216 "method": "nvmf_subsystem_add_ns", 00:17:54.216 "params": { 00:17:54.216 "namespace": { 00:17:54.216 "bdev_name": "malloc0", 00:17:54.216 "nguid": "44F72DC3F739401497BEF22FE6C1A4D6", 00:17:54.216 "no_auto_visible": false, 00:17:54.216 "nsid": 1, 00:17:54.216 "uuid": "44f72dc3-f739-4014-97be-f22fe6c1a4d6" 00:17:54.216 }, 00:17:54.216 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:54.216 } 00:17:54.216 }, 00:17:54.216 { 00:17:54.216 
"method": "nvmf_subsystem_add_listener", 00:17:54.216 "params": { 00:17:54.216 "listen_address": { 00:17:54.216 "adrfam": "IPv4", 00:17:54.216 "traddr": "10.0.0.2", 00:17:54.216 "trsvcid": "4420", 00:17:54.216 "trtype": "TCP" 00:17:54.216 }, 00:17:54.216 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.216 "secure_channel": true 00:17:54.216 } 00:17:54.216 } 00:17:54.216 ] 00:17:54.216 } 00:17:54.216 ] 00:17:54.216 }' 00:17:54.216 18:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:54.216 18:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.475 18:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=84486 00:17:54.475 18:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:17:54.475 18:31:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 84486 00:17:54.475 18:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 84486 ']' 00:17:54.475 18:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.475 18:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:54.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.475 18:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.475 18:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:54.475 18:31:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:54.475 [2024-05-13 18:31:10.213482] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:17:54.475 [2024-05-13 18:31:10.213599] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.475 [2024-05-13 18:31:10.350317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.734 [2024-05-13 18:31:10.470390] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.734 [2024-05-13 18:31:10.470481] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.734 [2024-05-13 18:31:10.470494] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:54.734 [2024-05-13 18:31:10.470502] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:54.734 [2024-05-13 18:31:10.470510] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:54.734 [2024-05-13 18:31:10.470615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.992 [2024-05-13 18:31:10.703370] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:54.992 [2024-05-13 18:31:10.735276] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:54.992 [2024-05-13 18:31:10.735384] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:54.992 [2024-05-13 18:31:10.735560] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:55.251 18:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:55.251 18:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:55.251 18:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:55.251 18:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:55.251 18:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.510 18:31:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.510 18:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=84530 00:17:55.510 18:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 84530 /var/tmp/bdevperf.sock 00:17:55.510 18:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 84530 ']' 00:17:55.510 18:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.510 18:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:55.510 18:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
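The configuration dumps above are long, but the TLS-specific wiring inside them reduces to a few entries: the pre-shared key file registered with the keyring subsystem, a host entry on the NVMe-oF subsystem that references that key, and a listener flagged secure_channel; on the initiator side, the bdevperf config's bdev_nvme_attach_controller call passes the same "psk": "key0". The fragment below is a trimmed sketch assembled from the values shown in those dumps (all other subsystems are omitted, so it is illustrative rather than a complete loadable config):

# trimmed view of the TLS-relevant entries (values copied from the dumps above)
cat <<'EOF'
{
  "subsystems": [
    { "subsystem": "keyring", "config": [
      { "method": "keyring_file_add_key",
        "params": { "name": "key0", "path": "/tmp/tmp.krxck21OLc" } }
    ] },
    { "subsystem": "nvmf", "config": [
      { "method": "nvmf_subsystem_add_host",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "host": "nqn.2016-06.io.spdk:host1",
                    "psk": "key0" } },
      { "method": "nvmf_subsystem_add_listener",
        "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                    "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                        "traddr": "10.0.0.2", "trsvcid": "4420" },
                    "secure_channel": true } }
    ] }
  ]
}
EOF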
00:17:55.510 18:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:55.510 18:31:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.510 18:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:17:55.510 18:31:11 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:17:55.510 "subsystems": [ 00:17:55.510 { 00:17:55.510 "subsystem": "keyring", 00:17:55.510 "config": [ 00:17:55.510 { 00:17:55.510 "method": "keyring_file_add_key", 00:17:55.510 "params": { 00:17:55.510 "name": "key0", 00:17:55.510 "path": "/tmp/tmp.krxck21OLc" 00:17:55.510 } 00:17:55.510 } 00:17:55.510 ] 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "subsystem": "iobuf", 00:17:55.510 "config": [ 00:17:55.510 { 00:17:55.510 "method": "iobuf_set_options", 00:17:55.510 "params": { 00:17:55.510 "large_bufsize": 135168, 00:17:55.510 "large_pool_count": 1024, 00:17:55.510 "small_bufsize": 8192, 00:17:55.510 "small_pool_count": 8192 00:17:55.510 } 00:17:55.510 } 00:17:55.510 ] 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "subsystem": "sock", 00:17:55.510 "config": [ 00:17:55.510 { 00:17:55.510 "method": "sock_impl_set_options", 00:17:55.510 "params": { 00:17:55.510 "enable_ktls": false, 00:17:55.510 "enable_placement_id": 0, 00:17:55.510 "enable_quickack": false, 00:17:55.510 "enable_recv_pipe": true, 00:17:55.510 "enable_zerocopy_send_client": false, 00:17:55.510 "enable_zerocopy_send_server": true, 00:17:55.510 "impl_name": "posix", 00:17:55.510 "recv_buf_size": 2097152, 00:17:55.510 "send_buf_size": 2097152, 00:17:55.510 "tls_version": 0, 00:17:55.510 "zerocopy_threshold": 0 00:17:55.510 } 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "method": "sock_impl_set_options", 00:17:55.510 "params": { 00:17:55.510 "enable_ktls": false, 00:17:55.510 "enable_placement_id": 0, 00:17:55.510 "enable_quickack": false, 00:17:55.510 "enable_recv_pipe": true, 00:17:55.510 "enable_zerocopy_send_client": false, 00:17:55.510 "enable_zerocopy_send_server": true, 00:17:55.510 "impl_name": "ssl", 00:17:55.510 "recv_buf_size": 4096, 00:17:55.510 "send_buf_size": 4096, 00:17:55.510 "tls_version": 0, 00:17:55.510 "zerocopy_threshold": 0 00:17:55.510 } 00:17:55.510 } 00:17:55.510 ] 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "subsystem": "vmd", 00:17:55.510 "config": [] 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "subsystem": "accel", 00:17:55.510 "config": [ 00:17:55.510 { 00:17:55.510 "method": "accel_set_options", 00:17:55.510 "params": { 00:17:55.510 "buf_count": 2048, 00:17:55.510 "large_cache_size": 16, 00:17:55.510 "sequence_count": 2048, 00:17:55.510 "small_cache_size": 128, 00:17:55.510 "task_count": 2048 00:17:55.510 } 00:17:55.510 } 00:17:55.510 ] 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "subsystem": "bdev", 00:17:55.510 "config": [ 00:17:55.510 { 00:17:55.510 "method": "bdev_set_options", 00:17:55.510 "params": { 00:17:55.510 "bdev_auto_examine": true, 00:17:55.510 "bdev_io_cache_size": 256, 00:17:55.510 "bdev_io_pool_size": 65535, 00:17:55.510 "iobuf_large_cache_size": 16, 00:17:55.510 "iobuf_small_cache_size": 128 00:17:55.510 } 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "method": "bdev_raid_set_options", 00:17:55.510 "params": { 00:17:55.510 "process_window_size_kb": 1024 00:17:55.510 } 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "method": "bdev_iscsi_set_options", 00:17:55.510 "params": { 00:17:55.510 "timeout_sec": 30 00:17:55.510 } 00:17:55.510 }, 00:17:55.510 { 
00:17:55.510 "method": "bdev_nvme_set_options", 00:17:55.510 "params": { 00:17:55.510 "action_on_timeout": "none", 00:17:55.510 "allow_accel_sequence": false, 00:17:55.510 "arbitration_burst": 0, 00:17:55.510 "bdev_retry_count": 3, 00:17:55.510 "ctrlr_loss_timeout_sec": 0, 00:17:55.510 "delay_cmd_submit": true, 00:17:55.510 "dhchap_dhgroups": [ 00:17:55.510 "null", 00:17:55.510 "ffdhe2048", 00:17:55.510 "ffdhe3072", 00:17:55.510 "ffdhe4096", 00:17:55.510 "ffdhe6144", 00:17:55.510 "ffdhe8192" 00:17:55.510 ], 00:17:55.510 "dhchap_digests": [ 00:17:55.510 "sha256", 00:17:55.510 "sha384", 00:17:55.510 "sha512" 00:17:55.510 ], 00:17:55.510 "disable_auto_failback": false, 00:17:55.510 "fast_io_fail_timeout_sec": 0, 00:17:55.510 "generate_uuids": false, 00:17:55.510 "high_priority_weight": 0, 00:17:55.510 "io_path_stat": false, 00:17:55.510 "io_queue_requests": 512, 00:17:55.510 "keep_alive_timeout_ms": 10000, 00:17:55.510 "low_priority_weight": 0, 00:17:55.510 "medium_priority_weight": 0, 00:17:55.510 "nvme_adminq_poll_period_us": 10000, 00:17:55.510 "nvme_error_stat": false, 00:17:55.510 "nvme_ioq_poll_period_us": 0, 00:17:55.510 "rdma_cm_event_timeout_ms": 0, 00:17:55.510 "rdma_max_cq_size": 0, 00:17:55.510 "rdma_srq_size": 0, 00:17:55.510 "reconnect_delay_sec": 0, 00:17:55.510 "timeout_admin_us": 0, 00:17:55.510 "timeout_us": 0, 00:17:55.510 "transport_ack_timeout": 0, 00:17:55.510 "transport_retry_count": 4, 00:17:55.510 "transport_tos": 0 00:17:55.510 } 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "method": "bdev_nvme_attach_controller", 00:17:55.510 "params": { 00:17:55.510 "adrfam": "IPv4", 00:17:55.510 "ctrlr_loss_timeout_sec": 0, 00:17:55.510 "ddgst": false, 00:17:55.510 "fast_io_fail_timeout_sec": 0, 00:17:55.510 "hdgst": false, 00:17:55.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:55.510 "name": "nvme0", 00:17:55.510 "prchk_guard": false, 00:17:55.510 "prchk_reftag": false, 00:17:55.510 "psk": "key0", 00:17:55.510 "reconnect_delay_sec": 0, 00:17:55.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:55.510 "traddr": "10.0.0.2", 00:17:55.510 "trsvcid": "4420", 00:17:55.510 "trtype": "TCP" 00:17:55.510 } 00:17:55.510 }, 00:17:55.510 { 00:17:55.510 "method": "bdev_nvme_set_hotplug", 00:17:55.510 "params": { 00:17:55.510 "enable": false, 00:17:55.510 "period_us": 100000 00:17:55.510 } 00:17:55.510 }, 00:17:55.510 { 00:17:55.511 "method": "bdev_enable_histogram", 00:17:55.511 "params": { 00:17:55.511 "enable": true, 00:17:55.511 "name": "nvme0n1" 00:17:55.511 } 00:17:55.511 }, 00:17:55.511 { 00:17:55.511 "method": "bdev_wait_for_examine" 00:17:55.511 } 00:17:55.511 ] 00:17:55.511 }, 00:17:55.511 { 00:17:55.511 "subsystem": "nbd", 00:17:55.511 "config": [] 00:17:55.511 } 00:17:55.511 ] 00:17:55.511 }' 00:17:55.511 [2024-05-13 18:31:11.265592] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
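Taken together, the bdevperf side of this test follows a simple pattern: the JSON captured earlier with save_config is replayed through a process substitution (which bash exposes as the /dev/fd/63 path visible on the command line just above), bdevperf is started with -z so it idles until told to run, and the run that follows is then driven entirely over its RPC socket. A condensed sketch of that sequence, using only commands and arguments that appear in this log; the cfg variable name is illustrative:

# $cfg holds the JSON captured earlier with "rpc.py -s /var/tmp/bdevperf.sock save_config"
# (shown in full above); <(echo ...) is what appears as "-c /dev/fd/63" on the command line
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$cfg") &

# confirm the NVMe-oF controller attached, then start the queued verify workload
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_get_controllers | jq -r '.[].name'        # expected: nvme0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests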
00:17:55.511 [2024-05-13 18:31:11.265692] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84530 ] 00:17:55.511 [2024-05-13 18:31:11.409143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.791 [2024-05-13 18:31:11.531895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.791 [2024-05-13 18:31:11.697491] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.414 18:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:56.414 18:31:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:17:56.414 18:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:56.414 18:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:17:56.673 18:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.673 18:31:12 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:56.931 Running I/O for 1 seconds... 00:17:57.866 00:17:57.866 Latency(us) 00:17:57.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.866 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:57.866 Verification LBA range: start 0x0 length 0x2000 00:17:57.866 nvme0n1 : 1.02 3717.55 14.52 0.00 0.00 34002.98 7000.44 35270.28 00:17:57.866 =================================================================================================================== 00:17:57.866 Total : 3717.55 14.52 0.00 0.00 34002.98 7000.44 35270.28 00:17:57.866 0 00:17:57.866 18:31:13 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:17:57.866 18:31:13 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:17:57.866 18:31:13 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:17:57.866 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:17:57.866 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:17:57.866 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:17:57.866 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:57.866 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:17:57.866 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:17:57.866 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:17:57.866 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:57.866 nvmf_trace.0 00:17:58.124 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:17:58.124 18:31:13 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 84530 00:17:58.124 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 84530 ']' 00:17:58.124 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 84530 00:17:58.124 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:58.124 18:31:13 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:58.125 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84530 00:17:58.125 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:58.125 killing process with pid 84530 00:17:58.125 Received shutdown signal, test time was about 1.000000 seconds 00:17:58.125 00:17:58.125 Latency(us) 00:17:58.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.125 =================================================================================================================== 00:17:58.125 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.125 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:58.125 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84530' 00:17:58.125 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 84530 00:17:58.125 18:31:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 84530 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:58.383 rmmod nvme_tcp 00:17:58.383 rmmod nvme_fabrics 00:17:58.383 rmmod nvme_keyring 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 84486 ']' 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 84486 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 84486 ']' 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 84486 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84486 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:58.383 killing process with pid 84486 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84486' 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 84486 00:17:58.383 [2024-05-13 18:31:14.235653] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:58.383 18:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 84486 00:17:58.641 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:58.641 18:31:14 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:58.641 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:58.641 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:58.641 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:58.641 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.641 18:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.641 18:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.641 18:31:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:58.641 18:31:14 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.hP1556dFj3 /tmp/tmp.aUJf53slrR /tmp/tmp.krxck21OLc 00:17:58.641 00:17:58.641 real 1m28.857s 00:17:58.641 user 2m20.307s 00:17:58.641 sys 0m29.277s 00:17:58.641 ************************************ 00:17:58.641 END TEST nvmf_tls 00:17:58.641 18:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:58.641 18:31:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.641 ************************************ 00:17:58.964 18:31:14 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:58.964 18:31:14 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:58.964 18:31:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:58.964 18:31:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:58.964 ************************************ 00:17:58.964 START TEST nvmf_fips 00:17:58.964 ************************************ 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:58.964 * Looking for test storage... 
00:17:58.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.964 18:31:14 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- 
scripts/common.sh@341 -- # case "$op" in 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:17:58.965 Error setting digest 00:17:58.965 0032D91FA37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:58.965 0032D91FA37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:58.965 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@432 -- # nvmf_veth_init 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:58.966 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:59.225 Cannot find device "nvmf_tgt_br" 00:17:59.225 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@155 -- # true 00:17:59.225 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:59.225 Cannot find device "nvmf_tgt_br2" 00:17:59.225 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@156 -- # true 00:17:59.225 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:59.225 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:59.225 Cannot find device "nvmf_tgt_br" 00:17:59.225 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@158 -- # true 00:17:59.225 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:59.225 Cannot find device "nvmf_tgt_br2" 00:17:59.225 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@159 -- # true 00:17:59.226 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:59.226 18:31:14 nvmf_tcp.nvmf_fips -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:59.226 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@162 -- # true 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:59.226 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@163 -- # true 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:59.226 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:59.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:17:59.486 00:17:59.486 --- 10.0.0.2 ping statistics --- 00:17:59.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.486 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:59.486 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:59.486 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:17:59.486 00:17:59.486 --- 10.0.0.3 ping statistics --- 00:17:59.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.486 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:59.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:59.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:17:59.486 00:17:59.486 --- 10.0.0.1 ping statistics --- 00:17:59.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.486 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@433 -- # return 0 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=84816 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 84816 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 84816 ']' 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:59.486 18:31:15 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:17:59.486 [2024-05-13 18:31:15.320244] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
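For reference, the veth/namespace topology that nvmf_veth_init built earlier in this trace (before nvmf_tgt was started above) condenses to the shell sequence below. Every command, interface name, and address is taken from the log; only the grouping comments are added, and the "link set ... up" steps are summarized in one comment.

    # target-side interfaces live in their own network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # first target port
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2     # second target port
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # NVMF_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # NVMF_SECOND_TARGET_IP
    # (each interface, plus lo inside the namespace, is then set up, as logged)
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br       # bridge the three host-side peers together
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # reachability checks, results logged above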
00:17:59.486 [2024-05-13 18:31:15.320357] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.744 [2024-05-13 18:31:15.462698] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.744 [2024-05-13 18:31:15.596314] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.744 [2024-05-13 18:31:15.596376] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.744 [2024-05-13 18:31:15.596389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.744 [2024-05-13 18:31:15.596400] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.744 [2024-05-13 18:31:15.596409] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.744 [2024-05-13 18:31:15.596437] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.677 18:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:00.677 18:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:18:00.677 18:31:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:00.677 18:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:00.677 18:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:00.677 18:31:16 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:00.677 18:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:00.677 18:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:00.677 18:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:00.677 18:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:00.677 18:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:00.677 18:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:00.677 18:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:00.677 18:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:00.936 [2024-05-13 18:31:16.657850] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.936 [2024-05-13 18:31:16.673762] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:00.936 [2024-05-13 18:31:16.673828] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:00.936 [2024-05-13 18:31:16.674001] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:00.936 [2024-05-13 18:31:16.705428] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:00.936 malloc0 00:18:00.936 18:31:16 
nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:00.936 18:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=84868 00:18:00.936 18:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:00.936 18:31:16 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 84868 /var/tmp/bdevperf.sock 00:18:00.936 18:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 84868 ']' 00:18:00.936 18:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.936 18:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:00.936 18:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.936 18:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:00.936 18:31:16 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:00.936 [2024-05-13 18:31:16.818230] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:18:00.936 [2024-05-13 18:31:16.818333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84868 ] 00:18:01.195 [2024-05-13 18:31:16.960736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.195 [2024-05-13 18:31:17.094679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.128 18:31:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:02.128 18:31:17 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:18:02.128 18:31:17 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:02.128 [2024-05-13 18:31:18.022223] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.128 [2024-05-13 18:31:18.022335] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:02.387 TLSTESTn1 00:18:02.387 18:31:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:02.387 Running I/O for 10 seconds... 
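Condensed from the trace, the FIPS/TLS data path exercised here is: write the interop PSK to a 0600 key file (the target side was configured with the same file by setup_nvmf_tgt_conf above), start bdevperf as a secondary application on its own RPC socket, attach a controller over TCP with that PSK, then drive the verify workload through bdevperf.py. All arguments are copied from the log; paths are shortened relative to the spdk repo root for readability.

    # PSK and key file, as created by fips.sh above
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > test/nvmf/fips/key.txt
    chmod 0600 test/nvmf/fips/key.txt

    # bdevperf is the TLS initiator; -z waits for configuration over /var/tmp/bdevperf.sock
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

    # attach to the listener at 10.0.0.2:4420 using the PSK, then run the 10 s verify workload
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk test/nvmf/fips/key.txt
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The deprecation warnings about [listen_]address.transport and the PSK path, visible in the trace, are expected on this v24.05-pre build and are scheduled for removal in v24.09.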
00:18:12.439 00:18:12.439 Latency(us) 00:18:12.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.439 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:12.439 Verification LBA range: start 0x0 length 0x2000 00:18:12.439 TLSTESTn1 : 10.02 3923.51 15.33 0.00 0.00 32556.33 863.88 20733.21 00:18:12.439 =================================================================================================================== 00:18:12.439 Total : 3923.51 15.33 0.00 0.00 32556.33 863.88 20733.21 00:18:12.439 0 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:12.439 nvmf_trace.0 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84868 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 84868 ']' 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 84868 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:12.439 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84868 00:18:12.698 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:18:12.698 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:18:12.698 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84868' 00:18:12.698 killing process with pid 84868 00:18:12.698 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 84868 00:18:12.698 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.698 00:18:12.698 Latency(us) 00:18:12.698 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.698 =================================================================================================================== 00:18:12.698 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.698 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 84868 00:18:12.698 [2024-05-13 18:31:28.390824] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 
00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:12.956 rmmod nvme_tcp 00:18:12.956 rmmod nvme_fabrics 00:18:12.956 rmmod nvme_keyring 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 84816 ']' 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 84816 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 84816 ']' 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 84816 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84816 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84816' 00:18:12.956 killing process with pid 84816 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 84816 00:18:12.956 [2024-05-13 18:31:28.772590] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:12.956 [2024-05-13 18:31:28.772637] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:12.956 18:31:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 84816 00:18:13.214 18:31:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:13.214 18:31:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:13.214 18:31:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:13.214 18:31:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:13.214 18:31:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:13.214 18:31:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.214 18:31:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.214 18:31:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.214 18:31:29 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:13.214 18:31:29 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:18:13.214 00:18:13.214 real 0m14.475s 00:18:13.214 user 0m19.307s 00:18:13.214 sys 0m6.076s 00:18:13.214 18:31:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:13.214 
18:31:29 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:13.214 ************************************ 00:18:13.214 END TEST nvmf_fips 00:18:13.214 ************************************ 00:18:13.214 18:31:29 nvmf_tcp -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:18:13.214 18:31:29 nvmf_tcp -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:13.214 18:31:29 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:13.214 18:31:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:13.214 18:31:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:13.214 ************************************ 00:18:13.214 START TEST nvmf_fuzz 00:18:13.214 ************************************ 00:18:13.214 18:31:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:13.472 * Looking for test storage... 00:18:13.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:13.472 18:31:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:13.472 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:13.472 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:13.472 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:13.472 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:13.472 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:13.472 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:13.472 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:13.472 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:13.472 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:13.472 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:13.472 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:13.472 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:18:13.472 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:13.473 18:31:29 
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:13.473 Cannot find device "nvmf_tgt_br" 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@155 -- # true 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:13.473 Cannot find device "nvmf_tgt_br2" 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@156 -- # true 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:13.473 Cannot find device "nvmf_tgt_br" 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@158 -- # true 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:13.473 Cannot find device "nvmf_tgt_br2" 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@159 -- # true 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- 
# ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:13.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:13.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:13.473 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:13.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:13.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:18:13.731 00:18:13.731 --- 10.0.0.2 ping statistics --- 00:18:13.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.731 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:13.731 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:13.731 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:18:13.731 00:18:13.731 --- 10.0.0.3 ping statistics --- 00:18:13.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.731 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:13.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:13.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:18:13.731 00:18:13.731 --- 10.0.0.1 ping statistics --- 00:18:13.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.731 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@433 -- # return 0 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=85215 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 85215 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@827 -- # '[' -z 85215 ']' 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
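The fuzz pass that starts here follows a small, fixed recipe; the sketch below is condensed from the invocations recorded in this trace (the full command lines, including the 30-second seeded run and the JSON-driven run, appear in the log that follows). "rpc.py" stands in for the rpc_cmd wrapper used by the test, and paths are shortened relative to the spdk repo root.

    # single-core target inside the namespace, RPC on /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

    # minimal subsystem for the fuzzer to exercise
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create -b Malloc0 64 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # timed run with seed 123456, then a run driven by the bundled example JSON
    trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F "$trid" -j test/app/fuzz/nvme_fuzz/example.json -a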
00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:13.731 18:31:29 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@860 -- # return 0 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:15.108 Malloc0 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:15.108 18:31:30 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:15.367 Shutting down the fuzz application 00:18:15.367 18:31:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:15.935 Shutting down the fuzz application 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:15.935 rmmod nvme_tcp 00:18:15.935 rmmod nvme_fabrics 00:18:15.935 rmmod nvme_keyring 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 85215 ']' 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 85215 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@946 -- # '[' -z 85215 ']' 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@950 -- # kill -0 85215 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # uname 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85215 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:15.935 killing process with pid 85215 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85215' 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@965 -- # kill 85215 00:18:15.935 18:31:31 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@970 -- # wait 85215 00:18:16.193 18:31:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:16.193 18:31:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:16.193 18:31:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:16.193 18:31:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:16.193 18:31:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:16.193 18:31:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.193 18:31:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.193 18:31:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.193 18:31:32 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:16.193 18:31:32 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:16.193 00:18:16.193 real 0m2.950s 00:18:16.193 user 0m3.164s 00:18:16.193 sys 0m0.655s 00:18:16.193 18:31:32 
nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:16.193 ************************************ 00:18:16.193 END TEST nvmf_fuzz 00:18:16.193 ************************************ 00:18:16.193 18:31:32 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:16.193 18:31:32 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:16.193 18:31:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:16.193 18:31:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:16.193 18:31:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:16.193 ************************************ 00:18:16.193 START TEST nvmf_multiconnection 00:18:16.193 ************************************ 00:18:16.193 18:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:16.452 * Looking for test storage... 00:18:16.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@432 -- # nvmf_veth_init 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:16.453 Cannot find device "nvmf_tgt_br" 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@155 -- # true 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- 
nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:16.453 Cannot find device "nvmf_tgt_br2" 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@156 -- # true 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:16.453 Cannot find device "nvmf_tgt_br" 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@158 -- # true 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:16.453 Cannot find device "nvmf_tgt_br2" 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@159 -- # true 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:16.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:16.453 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:16.453 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:16.712 18:31:32 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:16.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:18:16.712 00:18:16.712 --- 10.0.0.2 ping statistics --- 00:18:16.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.712 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:16.712 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:16.712 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:18:16.712 00:18:16.712 --- 10.0.0.3 ping statistics --- 00:18:16.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.712 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:16.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:16.712 00:18:16.712 --- 10.0.0.1 ping statistics --- 00:18:16.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.712 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@433 -- # return 0 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=85428 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 85428 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@827 -- # '[' -z 85428 ']' 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:16.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:16.712 18:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:16.712 [2024-05-13 18:31:32.642413] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:18:16.712 [2024-05-13 18:31:32.642535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.971 [2024-05-13 18:31:32.784833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.229 [2024-05-13 18:31:32.916033] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:17.229 [2024-05-13 18:31:32.916084] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.229 [2024-05-13 18:31:32.916098] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.229 [2024-05-13 18:31:32.916109] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.229 [2024-05-13 18:31:32.916117] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:17.229 [2024-05-13 18:31:32.916663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.229 [2024-05-13 18:31:32.916763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.229 [2024-05-13 18:31:32.916874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.229 [2024-05-13 18:31:32.916931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.795 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:17.795 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@860 -- # return 0 00:18:17.795 18:31:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.795 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.795 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.795 18:31:33 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.795 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:17.795 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.795 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.795 [2024-05-13 18:31:33.640098] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.795 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.795 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.796 Malloc1 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:17.796 [2024-05-13 18:31:33.710755] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:17.796 [2024-05-13 18:31:33.711344] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.796 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 Malloc2 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 Malloc3 
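[editor's sketch] The four-step sequence traced above for cnode1 through cnode3, and repeated below through cnode11, comes from a single loop in multiconnection.sh. Stripped of the xtrace noise it reduces to roughly the following (reconstructed from the traced commands; NVMF_SUBSYS is 11 in this run and rpc_cmd is the test suite's RPC helper):

    for i in $(seq 1 $NVMF_SUBSYS); do
        rpc_cmd bdev_malloc_create 64 512 -b Malloc$i                          # 64 MB malloc bdev, 512-byte blocks
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i # allow any host, serial SPDK$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i     # expose the bdev as a namespace
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    done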
00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 Malloc4 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.054 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 Malloc5 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 Malloc6 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 Malloc7 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.055 18:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.313 Malloc8 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:18.313 
18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.313 Malloc9 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.313 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.314 Malloc10 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.314 Malloc11 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:18.314 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:18.572 18:31:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:18.572 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:18:18.572 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.572 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:18.572 18:31:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:18:20.474 18:31:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:20.474 18:31:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:20.474 18:31:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK1 00:18:20.474 18:31:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:20.474 18:31:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.474 18:31:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:18:20.474 18:31:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:20.474 18:31:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:20.733 18:31:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:20.733 18:31:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:18:20.733 18:31:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:20.733 18:31:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:20.733 18:31:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:18:22.634 18:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:22.634 18:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:22.634 18:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK2 00:18:22.634 18:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:22.634 18:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:22.634 18:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:18:22.634 18:31:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:22.634 18:31:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:22.892 18:31:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:22.892 18:31:38 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:18:22.892 18:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:22.892 18:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:22.892 18:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:18:24.793 18:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:24.793 18:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:24.793 18:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK3 00:18:25.052 18:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:25.052 18:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:25.052 18:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:18:25.052 18:31:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:25.052 18:31:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:25.052 18:31:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:25.052 18:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:18:25.052 18:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:25.052 18:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:25.052 18:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:18:27.583 18:31:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:27.583 18:31:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:27.583 18:31:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK4 00:18:27.583 18:31:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:27.583 18:31:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:27.583 18:31:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:18:27.583 18:31:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:27.583 18:31:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:27.583 18:31:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:27.583 18:31:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:18:27.583 18:31:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.583 18:31:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 
00:18:27.583 18:31:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:18:29.484 18:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:29.484 18:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:29.484 18:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK5 00:18:29.484 18:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:29.484 18:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:29.484 18:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:18:29.484 18:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:29.484 18:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:29.484 18:31:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:29.484 18:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:18:29.484 18:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:29.484 18:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:29.484 18:31:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:18:31.385 18:31:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:31.385 18:31:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:31.385 18:31:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK6 00:18:31.643 18:31:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:31.643 18:31:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:31.643 18:31:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:18:31.643 18:31:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:31.643 18:31:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:31.643 18:31:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:31.643 18:31:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:18:31.643 18:31:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:31.643 18:31:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:31.643 18:31:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:18:34.172 18:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:34.173 18:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 
00:18:34.173 18:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK7 00:18:34.173 18:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:34.173 18:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:34.173 18:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:18:34.173 18:31:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:34.173 18:31:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:34.173 18:31:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:34.173 18:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:18:34.173 18:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:34.173 18:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:34.173 18:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:18:36.072 18:31:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:36.072 18:31:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:36.072 18:31:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK8 00:18:36.072 18:31:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:36.072 18:31:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:36.072 18:31:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:18:36.072 18:31:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:36.072 18:31:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:36.072 18:31:51 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:36.072 18:31:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:18:36.072 18:31:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:36.072 18:31:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:36.072 18:31:51 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:18:37.974 18:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:37.974 18:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:37.974 18:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK9 00:18:37.974 18:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:37.974 18:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == 
nvme_device_counter )) 00:18:37.974 18:31:53 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:18:37.974 18:31:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:37.974 18:31:53 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:38.232 18:31:54 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:38.232 18:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:18:38.232 18:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:38.232 18:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:38.232 18:31:54 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:18:40.763 18:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:40.763 18:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:40.763 18:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK10 00:18:40.763 18:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:40.763 18:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:40.763 18:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:18:40.763 18:31:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:40.763 18:31:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:40.763 18:31:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:40.763 18:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1194 -- # local i=0 00:18:40.763 18:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:18:40.763 18:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:18:40.763 18:31:56 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1201 -- # sleep 2 00:18:42.664 18:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:18:42.664 18:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:18:42.664 18:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # grep -c SPDK11 00:18:42.664 18:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:18:42.664 18:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:18:42.664 18:31:58 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # return 0 00:18:42.664 18:31:58 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:42.664 
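[editor's sketch] The eleven host-side connects traced above all follow the same connect-and-wait pattern; reconstructed from the xtrace output (waitforserial is the autotest_common.sh helper whose 2-second polling of lsblk is visible in the trace):

    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 \
            --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 \
            -t tcp -n nqn.2016-06.io.spdk:cnode$i -a 10.0.0.2 -s 4420
        waitforserial SPDK$i   # sleeps 2s, then counts matching serials in 'lsblk -l -o NAME,SERIAL'; retries up to ~16 times
    done

With all eleven namespaces visible on the host, fio-wrapper then drives the 10-second libaio read job over the block devices listed in the job file dumped below.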
[global] 00:18:42.664 thread=1 00:18:42.664 invalidate=1 00:18:42.664 rw=read 00:18:42.664 time_based=1 00:18:42.664 runtime=10 00:18:42.664 ioengine=libaio 00:18:42.664 direct=1 00:18:42.664 bs=262144 00:18:42.664 iodepth=64 00:18:42.664 norandommap=1 00:18:42.664 numjobs=1 00:18:42.664 00:18:42.664 [job0] 00:18:42.664 filename=/dev/nvme0n1 00:18:42.664 [job1] 00:18:42.664 filename=/dev/nvme10n1 00:18:42.664 [job2] 00:18:42.664 filename=/dev/nvme1n1 00:18:42.664 [job3] 00:18:42.664 filename=/dev/nvme2n1 00:18:42.664 [job4] 00:18:42.664 filename=/dev/nvme3n1 00:18:42.664 [job5] 00:18:42.664 filename=/dev/nvme4n1 00:18:42.664 [job6] 00:18:42.664 filename=/dev/nvme5n1 00:18:42.664 [job7] 00:18:42.664 filename=/dev/nvme6n1 00:18:42.664 [job8] 00:18:42.664 filename=/dev/nvme7n1 00:18:42.664 [job9] 00:18:42.664 filename=/dev/nvme8n1 00:18:42.664 [job10] 00:18:42.664 filename=/dev/nvme9n1 00:18:42.664 Could not set queue depth (nvme0n1) 00:18:42.664 Could not set queue depth (nvme10n1) 00:18:42.664 Could not set queue depth (nvme1n1) 00:18:42.664 Could not set queue depth (nvme2n1) 00:18:42.664 Could not set queue depth (nvme3n1) 00:18:42.664 Could not set queue depth (nvme4n1) 00:18:42.664 Could not set queue depth (nvme5n1) 00:18:42.664 Could not set queue depth (nvme6n1) 00:18:42.664 Could not set queue depth (nvme7n1) 00:18:42.664 Could not set queue depth (nvme8n1) 00:18:42.664 Could not set queue depth (nvme9n1) 00:18:42.664 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.664 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.664 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.664 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.664 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.664 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.664 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.664 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.664 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.664 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.664 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:42.664 fio-3.35 00:18:42.664 Starting 11 threads 00:18:54.870 00:18:54.870 job0: (groupid=0, jobs=1): err= 0: pid=85899: Mon May 13 18:32:09 2024 00:18:54.870 read: IOPS=1237, BW=309MiB/s (324MB/s)(3118MiB/10082msec) 00:18:54.870 slat (usec): min=15, max=90287, avg=796.03, stdev=3815.09 00:18:54.870 clat (msec): min=12, max=236, avg=50.82, stdev=37.42 00:18:54.870 lat (msec): min=12, max=258, avg=51.62, stdev=38.10 00:18:54.870 clat percentiles (msec): 00:18:54.870 | 1.00th=[ 18], 5.00th=[ 22], 10.00th=[ 24], 20.00th=[ 27], 00:18:54.870 | 30.00th=[ 29], 40.00th=[ 32], 50.00th=[ 34], 60.00th=[ 37], 00:18:54.870 | 70.00th=[ 51], 80.00th=[ 83], 90.00th=[ 109], 95.00th=[ 140], 00:18:54.870 | 99.00th=[ 171], 99.50th=[ 178], 99.90th=[ 197], 99.95th=[ 
197], 00:18:54.870 | 99.99th=[ 236] 00:18:54.870 bw ( KiB/s): min=94208, max=555520, per=15.50%, avg=317559.05, stdev=194188.84, samples=20 00:18:54.870 iops : min= 368, max= 2170, avg=1240.45, stdev=758.55, samples=20 00:18:54.870 lat (msec) : 20=3.54%, 50=66.30%, 100=18.07%, 250=12.09% 00:18:54.870 cpu : usr=0.42%, sys=3.51%, ctx=2465, majf=0, minf=4097 00:18:54.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:54.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.870 issued rwts: total=12472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.870 job1: (groupid=0, jobs=1): err= 0: pid=85900: Mon May 13 18:32:09 2024 00:18:54.870 read: IOPS=668, BW=167MiB/s (175MB/s)(1688MiB/10100msec) 00:18:54.870 slat (usec): min=18, max=86592, avg=1453.73, stdev=5693.89 00:18:54.870 clat (usec): min=1280, max=205736, avg=94159.32, stdev=47374.49 00:18:54.870 lat (usec): min=1315, max=252645, avg=95613.05, stdev=48383.27 00:18:54.870 clat percentiles (msec): 00:18:54.870 | 1.00th=[ 3], 5.00th=[ 20], 10.00th=[ 26], 20.00th=[ 32], 00:18:54.870 | 30.00th=[ 42], 40.00th=[ 110], 50.00th=[ 115], 60.00th=[ 121], 00:18:54.870 | 70.00th=[ 125], 80.00th=[ 129], 90.00th=[ 136], 95.00th=[ 148], 00:18:54.870 | 99.00th=[ 180], 99.50th=[ 184], 99.90th=[ 190], 99.95th=[ 190], 00:18:54.870 | 99.99th=[ 207] 00:18:54.870 bw ( KiB/s): min=108544, max=575488, per=8.35%, avg=171199.20, stdev=116326.09, samples=20 00:18:54.870 iops : min= 424, max= 2248, avg=668.70, stdev=454.41, samples=20 00:18:54.870 lat (msec) : 2=0.12%, 4=1.42%, 10=2.47%, 20=1.05%, 50=25.18% 00:18:54.870 lat (msec) : 100=1.88%, 250=67.88% 00:18:54.870 cpu : usr=0.28%, sys=2.02%, ctx=1366, majf=0, minf=4097 00:18:54.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:54.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.870 issued rwts: total=6752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.870 job2: (groupid=0, jobs=1): err= 0: pid=85901: Mon May 13 18:32:09 2024 00:18:54.870 read: IOPS=729, BW=182MiB/s (191MB/s)(1837MiB/10069msec) 00:18:54.870 slat (usec): min=17, max=100671, avg=1343.93, stdev=5151.87 00:18:54.870 clat (usec): min=1113, max=185145, avg=86186.65, stdev=27645.10 00:18:54.870 lat (usec): min=1162, max=251262, avg=87530.57, stdev=28391.51 00:18:54.870 clat percentiles (msec): 00:18:54.870 | 1.00th=[ 17], 5.00th=[ 34], 10.00th=[ 55], 20.00th=[ 70], 00:18:54.870 | 30.00th=[ 81], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 90], 00:18:54.870 | 70.00th=[ 94], 80.00th=[ 99], 90.00th=[ 113], 95.00th=[ 138], 00:18:54.870 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 186], 99.95th=[ 186], 00:18:54.870 | 99.99th=[ 186] 00:18:54.870 bw ( KiB/s): min=96768, max=383488, per=9.10%, avg=186467.00, stdev=60328.68, samples=20 00:18:54.870 iops : min= 378, max= 1498, avg=728.20, stdev=235.61, samples=20 00:18:54.870 lat (msec) : 2=0.04%, 4=0.39%, 10=0.08%, 20=1.27%, 50=5.09% 00:18:54.870 lat (msec) : 100=75.15%, 250=17.98% 00:18:54.870 cpu : usr=0.34%, sys=2.26%, ctx=1487, majf=0, minf=4097 00:18:54.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:54.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.870 issued rwts: total=7347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.870 job3: (groupid=0, jobs=1): err= 0: pid=85907: Mon May 13 18:32:09 2024 00:18:54.870 read: IOPS=697, BW=174MiB/s (183MB/s)(1757MiB/10071msec) 00:18:54.870 slat (usec): min=18, max=65686, avg=1401.90, stdev=4906.89 00:18:54.870 clat (msec): min=24, max=155, avg=90.21, stdev=18.47 00:18:54.870 lat (msec): min=25, max=186, avg=91.61, stdev=19.18 00:18:54.870 clat percentiles (msec): 00:18:54.870 | 1.00th=[ 41], 5.00th=[ 58], 10.00th=[ 64], 20.00th=[ 80], 00:18:54.870 | 30.00th=[ 85], 40.00th=[ 88], 50.00th=[ 91], 60.00th=[ 94], 00:18:54.870 | 70.00th=[ 99], 80.00th=[ 103], 90.00th=[ 113], 95.00th=[ 123], 00:18:54.870 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 155], 99.95th=[ 155], 00:18:54.870 | 99.99th=[ 157] 00:18:54.870 bw ( KiB/s): min=130048, max=260608, per=8.70%, avg=178213.10, stdev=25994.33, samples=20 00:18:54.870 iops : min= 508, max= 1018, avg=696.10, stdev=101.61, samples=20 00:18:54.870 lat (msec) : 50=1.89%, 100=74.79%, 250=23.31% 00:18:54.870 cpu : usr=0.37%, sys=2.64%, ctx=1491, majf=0, minf=4097 00:18:54.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:54.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.870 issued rwts: total=7026,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.870 job4: (groupid=0, jobs=1): err= 0: pid=85909: Mon May 13 18:32:09 2024 00:18:54.870 read: IOPS=546, BW=137MiB/s (143MB/s)(1377MiB/10085msec) 00:18:54.870 slat (usec): min=17, max=75188, avg=1809.82, stdev=6562.10 00:18:54.870 clat (msec): min=24, max=195, avg=115.15, stdev=15.05 00:18:54.870 lat (msec): min=24, max=195, avg=116.96, stdev=16.23 00:18:54.870 clat percentiles (msec): 00:18:54.870 | 1.00th=[ 74], 5.00th=[ 90], 10.00th=[ 99], 20.00th=[ 107], 00:18:54.870 | 30.00th=[ 110], 40.00th=[ 114], 50.00th=[ 116], 60.00th=[ 118], 00:18:54.870 | 70.00th=[ 122], 80.00th=[ 125], 90.00th=[ 130], 95.00th=[ 136], 00:18:54.870 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 192], 99.95th=[ 192], 00:18:54.870 | 99.99th=[ 197] 00:18:54.870 bw ( KiB/s): min=128000, max=175616, per=6.80%, avg=139339.40, stdev=10066.74, samples=20 00:18:54.870 iops : min= 500, max= 686, avg=544.20, stdev=39.37, samples=20 00:18:54.870 lat (msec) : 50=0.35%, 100=10.68%, 250=88.98% 00:18:54.870 cpu : usr=0.17%, sys=1.80%, ctx=1248, majf=0, minf=4097 00:18:54.870 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:54.870 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.870 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.870 issued rwts: total=5507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.870 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.870 job5: (groupid=0, jobs=1): err= 0: pid=85910: Mon May 13 18:32:09 2024 00:18:54.870 read: IOPS=1034, BW=259MiB/s (271MB/s)(2622MiB/10137msec) 00:18:54.870 slat (usec): min=13, max=108855, avg=907.47, stdev=4398.07 00:18:54.870 clat (msec): min=7, max=246, avg=60.85, stdev=43.29 00:18:54.870 lat (msec): min=7, max=286, avg=61.76, stdev=44.00 00:18:54.870 clat percentiles (msec): 
00:18:54.870 | 1.00th=[ 16], 5.00th=[ 21], 10.00th=[ 24], 20.00th=[ 27], 00:18:54.870 | 30.00th=[ 30], 40.00th=[ 33], 50.00th=[ 36], 60.00th=[ 72], 00:18:54.870 | 70.00th=[ 87], 80.00th=[ 95], 90.00th=[ 113], 95.00th=[ 150], 00:18:54.870 | 99.00th=[ 182], 99.50th=[ 220], 99.90th=[ 247], 99.95th=[ 247], 00:18:54.870 | 99.99th=[ 247] 00:18:54.871 bw ( KiB/s): min=85504, max=554496, per=13.02%, avg=266805.80, stdev=180208.18, samples=20 00:18:54.871 iops : min= 334, max= 2166, avg=1042.20, stdev=703.93, samples=20 00:18:54.871 lat (msec) : 10=0.13%, 20=3.62%, 50=55.12%, 100=24.86%, 250=16.27% 00:18:54.871 cpu : usr=0.37%, sys=3.37%, ctx=2041, majf=0, minf=4097 00:18:54.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:54.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.871 issued rwts: total=10487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.871 job6: (groupid=0, jobs=1): err= 0: pid=85911: Mon May 13 18:32:09 2024 00:18:54.871 read: IOPS=541, BW=135MiB/s (142MB/s)(1366MiB/10084msec) 00:18:54.871 slat (usec): min=17, max=73746, avg=1824.36, stdev=6750.81 00:18:54.871 clat (msec): min=15, max=180, avg=116.08, stdev=14.35 00:18:54.871 lat (msec): min=15, max=211, avg=117.90, stdev=15.74 00:18:54.871 clat percentiles (msec): 00:18:54.871 | 1.00th=[ 81], 5.00th=[ 93], 10.00th=[ 101], 20.00th=[ 108], 00:18:54.871 | 30.00th=[ 112], 40.00th=[ 114], 50.00th=[ 116], 60.00th=[ 121], 00:18:54.871 | 70.00th=[ 123], 80.00th=[ 127], 90.00th=[ 131], 95.00th=[ 136], 00:18:54.871 | 99.00th=[ 150], 99.50th=[ 159], 99.90th=[ 182], 99.95th=[ 182], 00:18:54.871 | 99.99th=[ 182] 00:18:54.871 bw ( KiB/s): min=123392, max=173056, per=6.74%, avg=138202.85, stdev=11505.78, samples=20 00:18:54.871 iops : min= 482, max= 676, avg=539.80, stdev=44.98, samples=20 00:18:54.871 lat (msec) : 20=0.15%, 50=0.44%, 100=9.59%, 250=89.82% 00:18:54.871 cpu : usr=0.12%, sys=1.83%, ctx=963, majf=0, minf=4097 00:18:54.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:54.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.871 issued rwts: total=5463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.871 job7: (groupid=0, jobs=1): err= 0: pid=85912: Mon May 13 18:32:09 2024 00:18:54.871 read: IOPS=524, BW=131MiB/s (137MB/s)(1323MiB/10097msec) 00:18:54.871 slat (usec): min=13, max=98121, avg=1844.81, stdev=6565.26 00:18:54.871 clat (msec): min=18, max=242, avg=120.03, stdev=21.24 00:18:54.871 lat (msec): min=18, max=258, avg=121.87, stdev=22.31 00:18:54.871 clat percentiles (msec): 00:18:54.871 | 1.00th=[ 27], 5.00th=[ 94], 10.00th=[ 102], 20.00th=[ 110], 00:18:54.871 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 121], 60.00th=[ 124], 00:18:54.871 | 70.00th=[ 127], 80.00th=[ 130], 90.00th=[ 138], 95.00th=[ 159], 00:18:54.871 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 209], 99.95th=[ 209], 00:18:54.871 | 99.99th=[ 243] 00:18:54.871 bw ( KiB/s): min=92998, max=167936, per=6.53%, avg=133877.65, stdev=13079.94, samples=20 00:18:54.871 iops : min= 363, max= 656, avg=522.90, stdev=51.12, samples=20 00:18:54.871 lat (msec) : 20=0.06%, 50=1.57%, 100=7.69%, 250=90.69% 00:18:54.871 cpu : usr=0.21%, sys=1.70%, 
ctx=1133, majf=0, minf=4097 00:18:54.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:54.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.871 issued rwts: total=5293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.871 job8: (groupid=0, jobs=1): err= 0: pid=85913: Mon May 13 18:32:09 2024 00:18:54.871 read: IOPS=660, BW=165MiB/s (173MB/s)(1666MiB/10093msec) 00:18:54.871 slat (usec): min=16, max=62377, avg=1471.02, stdev=5215.87 00:18:54.871 clat (msec): min=26, max=216, avg=95.23, stdev=24.79 00:18:54.871 lat (msec): min=26, max=216, avg=96.70, stdev=25.46 00:18:54.871 clat percentiles (msec): 00:18:54.871 | 1.00th=[ 47], 5.00th=[ 57], 10.00th=[ 62], 20.00th=[ 80], 00:18:54.871 | 30.00th=[ 86], 40.00th=[ 90], 50.00th=[ 93], 60.00th=[ 96], 00:18:54.871 | 70.00th=[ 103], 80.00th=[ 115], 90.00th=[ 130], 95.00th=[ 144], 00:18:54.871 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 218], 99.95th=[ 218], 00:18:54.871 | 99.99th=[ 218] 00:18:54.871 bw ( KiB/s): min=106496, max=267776, per=8.24%, avg=168934.90, stdev=39715.18, samples=20 00:18:54.871 iops : min= 416, max= 1046, avg=659.90, stdev=155.13, samples=20 00:18:54.871 lat (msec) : 50=1.74%, 100=65.05%, 250=33.21% 00:18:54.871 cpu : usr=0.34%, sys=2.15%, ctx=1139, majf=0, minf=4097 00:18:54.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:54.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.871 issued rwts: total=6663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.871 job9: (groupid=0, jobs=1): err= 0: pid=85914: Mon May 13 18:32:09 2024 00:18:54.871 read: IOPS=655, BW=164MiB/s (172MB/s)(1649MiB/10070msec) 00:18:54.871 slat (usec): min=18, max=126507, avg=1495.56, stdev=5546.32 00:18:54.871 clat (msec): min=12, max=195, avg=96.01, stdev=29.79 00:18:54.871 lat (msec): min=13, max=286, avg=97.50, stdev=30.57 00:18:54.871 clat percentiles (msec): 00:18:54.871 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 77], 20.00th=[ 84], 00:18:54.871 | 30.00th=[ 87], 40.00th=[ 90], 50.00th=[ 93], 60.00th=[ 96], 00:18:54.871 | 70.00th=[ 101], 80.00th=[ 111], 90.00th=[ 136], 95.00th=[ 155], 00:18:54.871 | 99.00th=[ 180], 99.50th=[ 192], 99.90th=[ 194], 99.95th=[ 194], 00:18:54.871 | 99.99th=[ 194] 00:18:54.871 bw ( KiB/s): min=96574, max=296960, per=8.16%, avg=167194.90, stdev=40668.09, samples=20 00:18:54.871 iops : min= 377, max= 1160, avg=653.05, stdev=158.91, samples=20 00:18:54.871 lat (msec) : 20=0.61%, 50=7.00%, 100=62.31%, 250=30.08% 00:18:54.871 cpu : usr=0.27%, sys=2.13%, ctx=1245, majf=0, minf=4097 00:18:54.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:54.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.871 issued rwts: total=6596,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.871 job10: (groupid=0, jobs=1): err= 0: pid=85916: Mon May 13 18:32:09 2024 00:18:54.871 read: IOPS=748, BW=187MiB/s (196MB/s)(1886MiB/10080msec) 00:18:54.871 slat (usec): min=13, max=151398, avg=1289.35, stdev=5113.96 
00:18:54.871 clat (msec): min=17, max=213, avg=84.13, stdev=38.11 00:18:54.871 lat (msec): min=17, max=299, avg=85.42, stdev=38.89 00:18:54.871 clat percentiles (msec): 00:18:54.871 | 1.00th=[ 21], 5.00th=[ 23], 10.00th=[ 26], 20.00th=[ 33], 00:18:54.871 | 30.00th=[ 83], 40.00th=[ 88], 50.00th=[ 92], 60.00th=[ 96], 00:18:54.871 | 70.00th=[ 102], 80.00th=[ 112], 90.00th=[ 127], 95.00th=[ 142], 00:18:54.871 | 99.00th=[ 169], 99.50th=[ 199], 99.90th=[ 203], 99.95th=[ 211], 00:18:54.871 | 99.99th=[ 213] 00:18:54.871 bw ( KiB/s): min=107008, max=522240, per=9.34%, avg=191498.60, stdev=110645.27, samples=20 00:18:54.871 iops : min= 418, max= 2040, avg=748.00, stdev=432.22, samples=20 00:18:54.871 lat (msec) : 20=0.98%, 50=24.88%, 100=41.68%, 250=32.46% 00:18:54.871 cpu : usr=0.26%, sys=2.48%, ctx=1590, majf=0, minf=4097 00:18:54.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:54.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:54.871 issued rwts: total=7545,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.871 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:54.871 00:18:54.871 Run status group 0 (all jobs): 00:18:54.871 READ: bw=2001MiB/s (2099MB/s), 131MiB/s-309MiB/s (137MB/s-324MB/s), io=19.8GiB (21.3GB), run=10069-10137msec 00:18:54.871 00:18:54.871 Disk stats (read/write): 00:18:54.871 nvme0n1: ios=24799/0, merge=0/0, ticks=1224973/0, in_queue=1224973, util=96.91% 00:18:54.871 nvme10n1: ios=13339/0, merge=0/0, ticks=1234265/0, in_queue=1234265, util=97.35% 00:18:54.871 nvme1n1: ios=14509/0, merge=0/0, ticks=1232907/0, in_queue=1232907, util=97.64% 00:18:54.871 nvme2n1: ios=13895/0, merge=0/0, ticks=1236141/0, in_queue=1236141, util=98.08% 00:18:54.871 nvme3n1: ios=10829/0, merge=0/0, ticks=1231781/0, in_queue=1231781, util=97.64% 00:18:54.871 nvme4n1: ios=20847/0, merge=0/0, ticks=1225736/0, in_queue=1225736, util=98.05% 00:18:54.871 nvme5n1: ios=10773/0, merge=0/0, ticks=1235657/0, in_queue=1235657, util=98.21% 00:18:54.871 nvme6n1: ios=10433/0, merge=0/0, ticks=1234316/0, in_queue=1234316, util=98.23% 00:18:54.871 nvme7n1: ios=13198/0, merge=0/0, ticks=1235773/0, in_queue=1235773, util=98.50% 00:18:54.871 nvme8n1: ios=13023/0, merge=0/0, ticks=1236202/0, in_queue=1236202, util=98.70% 00:18:54.871 nvme9n1: ios=14902/0, merge=0/0, ticks=1229818/0, in_queue=1229818, util=98.61% 00:18:54.871 18:32:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:54.871 [global] 00:18:54.871 thread=1 00:18:54.871 invalidate=1 00:18:54.871 rw=randwrite 00:18:54.871 time_based=1 00:18:54.871 runtime=10 00:18:54.871 ioengine=libaio 00:18:54.871 direct=1 00:18:54.871 bs=262144 00:18:54.871 iodepth=64 00:18:54.872 norandommap=1 00:18:54.872 numjobs=1 00:18:54.872 00:18:54.872 [job0] 00:18:54.872 filename=/dev/nvme0n1 00:18:54.872 [job1] 00:18:54.872 filename=/dev/nvme10n1 00:18:54.872 [job2] 00:18:54.872 filename=/dev/nvme1n1 00:18:54.872 [job3] 00:18:54.872 filename=/dev/nvme2n1 00:18:54.872 [job4] 00:18:54.872 filename=/dev/nvme3n1 00:18:54.872 [job5] 00:18:54.872 filename=/dev/nvme4n1 00:18:54.872 [job6] 00:18:54.872 filename=/dev/nvme5n1 00:18:54.872 [job7] 00:18:54.872 filename=/dev/nvme6n1 00:18:54.872 [job8] 00:18:54.872 filename=/dev/nvme7n1 00:18:54.872 [job9] 00:18:54.872 filename=/dev/nvme8n1 00:18:54.872 [job10] 
00:18:54.872 filename=/dev/nvme9n1 00:18:54.872 Could not set queue depth (nvme0n1) 00:18:54.872 Could not set queue depth (nvme10n1) 00:18:54.872 Could not set queue depth (nvme1n1) 00:18:54.872 Could not set queue depth (nvme2n1) 00:18:54.872 Could not set queue depth (nvme3n1) 00:18:54.872 Could not set queue depth (nvme4n1) 00:18:54.872 Could not set queue depth (nvme5n1) 00:18:54.872 Could not set queue depth (nvme6n1) 00:18:54.872 Could not set queue depth (nvme7n1) 00:18:54.872 Could not set queue depth (nvme8n1) 00:18:54.872 Could not set queue depth (nvme9n1) 00:18:54.872 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.872 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.872 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.872 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.872 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.872 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.872 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.872 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.872 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.872 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.872 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:54.872 fio-3.35 00:18:54.872 Starting 11 threads 00:19:04.888 00:19:04.888 job0: (groupid=0, jobs=1): err= 0: pid=86107: Mon May 13 18:32:19 2024 00:19:04.888 write: IOPS=355, BW=88.8MiB/s (93.1MB/s)(901MiB/10150msec); 0 zone resets 00:19:04.888 slat (usec): min=17, max=88024, avg=2728.39, stdev=7248.63 00:19:04.888 clat (msec): min=19, max=662, avg=177.40, stdev=109.75 00:19:04.888 lat (msec): min=20, max=662, avg=180.12, stdev=111.25 00:19:04.888 clat percentiles (msec): 00:19:04.888 | 1.00th=[ 30], 5.00th=[ 70], 10.00th=[ 73], 20.00th=[ 75], 00:19:04.888 | 30.00th=[ 123], 40.00th=[ 140], 50.00th=[ 150], 60.00th=[ 174], 00:19:04.888 | 70.00th=[ 190], 80.00th=[ 234], 90.00th=[ 347], 95.00th=[ 426], 00:19:04.888 | 99.00th=[ 531], 99.50th=[ 600], 99.90th=[ 642], 99.95th=[ 659], 00:19:04.888 | 99.99th=[ 659] 00:19:04.888 bw ( KiB/s): min=35328, max=237568, per=6.28%, avg=90665.70, stdev=53430.88, samples=20 00:19:04.888 iops : min= 138, max= 928, avg=354.15, stdev=208.71, samples=20 00:19:04.888 lat (msec) : 20=0.03%, 50=2.83%, 100=22.08%, 250=57.86%, 500=15.70% 00:19:04.888 lat (msec) : 750=1.50% 00:19:04.888 cpu : usr=0.68%, sys=0.76%, ctx=5014, majf=0, minf=1 00:19:04.888 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:19:04.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.889 issued rwts: total=0,3605,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.889 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:19:04.889 job1: (groupid=0, jobs=1): err= 0: pid=86108: Mon May 13 18:32:19 2024 00:19:04.889 write: IOPS=1239, BW=310MiB/s (325MB/s)(3138MiB/10127msec); 0 zone resets 00:19:04.889 slat (usec): min=15, max=12924, avg=773.98, stdev=1571.64 00:19:04.889 clat (msec): min=2, max=258, avg=50.85, stdev=28.88 00:19:04.889 lat (msec): min=2, max=258, avg=51.62, stdev=29.28 00:19:04.889 clat percentiles (msec): 00:19:04.889 | 1.00th=[ 26], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 39], 00:19:04.889 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 41], 60.00th=[ 42], 00:19:04.889 | 70.00th=[ 43], 80.00th=[ 45], 90.00th=[ 87], 95.00th=[ 120], 00:19:04.889 | 99.00th=[ 165], 99.50th=[ 174], 99.90th=[ 232], 99.95th=[ 243], 00:19:04.889 | 99.99th=[ 259] 00:19:04.889 bw ( KiB/s): min=104239, max=426496, per=22.13%, avg=319601.25, stdev=121915.10, samples=20 00:19:04.889 iops : min= 407, max= 1666, avg=1248.35, stdev=476.28, samples=20 00:19:04.889 lat (msec) : 4=0.05%, 10=0.15%, 20=0.49%, 50=83.16%, 100=6.27% 00:19:04.889 lat (msec) : 250=9.83%, 500=0.05% 00:19:04.889 cpu : usr=1.74%, sys=2.59%, ctx=15461, majf=0, minf=1 00:19:04.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:04.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.889 issued rwts: total=0,12550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.889 job2: (groupid=0, jobs=1): err= 0: pid=86117: Mon May 13 18:32:19 2024 00:19:04.889 write: IOPS=430, BW=108MiB/s (113MB/s)(1091MiB/10128msec); 0 zone resets 00:19:04.889 slat (usec): min=17, max=33205, avg=2271.26, stdev=4306.02 00:19:04.889 clat (msec): min=19, max=257, avg=146.16, stdev=43.18 00:19:04.889 lat (msec): min=19, max=258, avg=148.43, stdev=43.62 00:19:04.889 clat percentiles (msec): 00:19:04.889 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 75], 20.00th=[ 108], 00:19:04.889 | 30.00th=[ 114], 40.00th=[ 138], 50.00th=[ 161], 60.00th=[ 169], 00:19:04.889 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 199], 00:19:04.889 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 251], 99.95th=[ 251], 00:19:04.889 | 99.99th=[ 259] 00:19:04.889 bw ( KiB/s): min=77824, max=218624, per=7.62%, avg=110102.30, stdev=34167.40, samples=20 00:19:04.889 iops : min= 304, max= 854, avg=430.05, stdev=133.48, samples=20 00:19:04.889 lat (msec) : 20=0.09%, 50=0.37%, 100=14.87%, 250=84.56%, 500=0.11% 00:19:04.889 cpu : usr=0.81%, sys=1.06%, ctx=5311, majf=0, minf=1 00:19:04.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:04.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.889 issued rwts: total=0,4365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.889 job3: (groupid=0, jobs=1): err= 0: pid=86121: Mon May 13 18:32:19 2024 00:19:04.889 write: IOPS=411, BW=103MiB/s (108MB/s)(1042MiB/10135msec); 0 zone resets 00:19:04.889 slat (usec): min=18, max=86067, avg=2396.90, stdev=4823.23 00:19:04.889 clat (msec): min=9, max=263, avg=153.17, stdev=48.00 00:19:04.889 lat (msec): min=9, max=263, avg=155.57, stdev=48.49 00:19:04.889 clat percentiles (msec): 00:19:04.889 | 1.00th=[ 46], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 108], 00:19:04.889 | 30.00th=[ 120], 
40.00th=[ 157], 50.00th=[ 171], 60.00th=[ 180], 00:19:04.889 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 199], 95.00th=[ 220], 00:19:04.889 | 99.00th=[ 243], 99.50th=[ 251], 99.90th=[ 255], 99.95th=[ 255], 00:19:04.889 | 99.99th=[ 264] 00:19:04.889 bw ( KiB/s): min=77824, max=219648, per=7.27%, avg=105012.25, stdev=34156.47, samples=20 00:19:04.889 iops : min= 304, max= 858, avg=410.10, stdev=133.39, samples=20 00:19:04.889 lat (msec) : 10=0.02%, 20=0.34%, 50=0.91%, 100=13.83%, 250=84.33% 00:19:04.889 lat (msec) : 500=0.58% 00:19:04.889 cpu : usr=0.95%, sys=1.20%, ctx=3159, majf=0, minf=1 00:19:04.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:19:04.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.889 issued rwts: total=0,4166,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.889 job4: (groupid=0, jobs=1): err= 0: pid=86122: Mon May 13 18:32:19 2024 00:19:04.889 write: IOPS=550, BW=138MiB/s (144MB/s)(1389MiB/10096msec); 0 zone resets 00:19:04.889 slat (usec): min=19, max=19327, avg=1752.16, stdev=3108.24 00:19:04.889 clat (msec): min=11, max=226, avg=114.52, stdev=18.47 00:19:04.889 lat (msec): min=13, max=226, avg=116.28, stdev=18.52 00:19:04.889 clat percentiles (msec): 00:19:04.889 | 1.00th=[ 40], 5.00th=[ 103], 10.00th=[ 105], 20.00th=[ 109], 00:19:04.889 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 113], 60.00th=[ 115], 00:19:04.889 | 70.00th=[ 117], 80.00th=[ 121], 90.00th=[ 130], 95.00th=[ 138], 00:19:04.889 | 99.00th=[ 188], 99.50th=[ 213], 99.90th=[ 226], 99.95th=[ 226], 00:19:04.889 | 99.99th=[ 226] 00:19:04.889 bw ( KiB/s): min=110882, max=161280, per=9.73%, avg=140555.00, stdev=10072.97, samples=20 00:19:04.889 iops : min= 433, max= 630, avg=548.95, stdev=39.33, samples=20 00:19:04.889 lat (msec) : 20=0.11%, 50=1.39%, 100=2.65%, 250=95.86% 00:19:04.889 cpu : usr=1.15%, sys=1.37%, ctx=7089, majf=0, minf=1 00:19:04.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:04.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.889 issued rwts: total=0,5555,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.889 job5: (groupid=0, jobs=1): err= 0: pid=86124: Mon May 13 18:32:19 2024 00:19:04.889 write: IOPS=363, BW=90.8MiB/s (95.2MB/s)(921MiB/10142msec); 0 zone resets 00:19:04.889 slat (usec): min=18, max=88003, avg=2594.83, stdev=6794.20 00:19:04.889 clat (msec): min=19, max=661, avg=173.47, stdev=105.34 00:19:04.889 lat (msec): min=24, max=661, avg=176.07, stdev=106.59 00:19:04.889 clat percentiles (msec): 00:19:04.889 | 1.00th=[ 70], 5.00th=[ 71], 10.00th=[ 74], 20.00th=[ 78], 00:19:04.889 | 30.00th=[ 114], 40.00th=[ 136], 50.00th=[ 146], 60.00th=[ 165], 00:19:04.889 | 70.00th=[ 186], 80.00th=[ 230], 90.00th=[ 330], 95.00th=[ 405], 00:19:04.889 | 99.00th=[ 531], 99.50th=[ 592], 99.90th=[ 642], 99.95th=[ 659], 00:19:04.889 | 99.99th=[ 659] 00:19:04.889 bw ( KiB/s): min=34816, max=218624, per=6.42%, avg=92737.35, stdev=51356.70, samples=20 00:19:04.889 iops : min= 136, max= 854, avg=362.15, stdev=200.63, samples=20 00:19:04.889 lat (msec) : 20=0.03%, 50=0.33%, 100=25.51%, 250=57.75%, 500=14.87% 00:19:04.889 lat (msec) : 750=1.52% 00:19:04.889 cpu : usr=0.73%, 
sys=0.95%, ctx=4766, majf=0, minf=1 00:19:04.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:19:04.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.889 issued rwts: total=0,3685,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.889 job6: (groupid=0, jobs=1): err= 0: pid=86125: Mon May 13 18:32:19 2024 00:19:04.889 write: IOPS=328, BW=82.2MiB/s (86.2MB/s)(835MiB/10154msec); 0 zone resets 00:19:04.889 slat (usec): min=21, max=89345, avg=2948.70, stdev=6448.13 00:19:04.889 clat (msec): min=16, max=662, avg=191.64, stdev=84.09 00:19:04.889 lat (msec): min=16, max=662, avg=194.59, stdev=85.07 00:19:04.889 clat percentiles (msec): 00:19:04.889 | 1.00th=[ 55], 5.00th=[ 138], 10.00th=[ 142], 20.00th=[ 157], 00:19:04.889 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 171], 60.00th=[ 176], 00:19:04.889 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 249], 95.00th=[ 401], 00:19:04.889 | 99.00th=[ 535], 99.50th=[ 617], 99.90th=[ 642], 99.95th=[ 659], 00:19:04.889 | 99.99th=[ 659] 00:19:04.889 bw ( KiB/s): min=34816, max=117248, per=5.80%, avg=83839.50, stdev=24692.02, samples=20 00:19:04.889 iops : min= 136, max= 458, avg=327.45, stdev=96.43, samples=20 00:19:04.889 lat (msec) : 20=0.12%, 50=0.84%, 100=0.84%, 250=88.23%, 500=8.33% 00:19:04.889 lat (msec) : 750=1.65% 00:19:04.889 cpu : usr=0.82%, sys=0.69%, ctx=2866, majf=0, minf=1 00:19:04.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:04.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.889 issued rwts: total=0,3338,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.889 job7: (groupid=0, jobs=1): err= 0: pid=86126: Mon May 13 18:32:19 2024 00:19:04.889 write: IOPS=553, BW=138MiB/s (145MB/s)(1400MiB/10116msec); 0 zone resets 00:19:04.889 slat (usec): min=15, max=90598, avg=1752.49, stdev=4137.09 00:19:04.889 clat (msec): min=8, max=255, avg=113.81, stdev=68.53 00:19:04.889 lat (msec): min=9, max=255, avg=115.56, stdev=69.49 00:19:04.889 clat percentiles (msec): 00:19:04.889 | 1.00th=[ 36], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 41], 00:19:04.889 | 30.00th=[ 45], 40.00th=[ 62], 50.00th=[ 84], 60.00th=[ 165], 00:19:04.889 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 197], 95.00th=[ 207], 00:19:04.889 | 99.00th=[ 232], 99.50th=[ 239], 99.90th=[ 247], 99.95th=[ 249], 00:19:04.889 | 99.99th=[ 255] 00:19:04.889 bw ( KiB/s): min=73728, max=398336, per=9.81%, avg=141738.00, stdev=107147.11, samples=20 00:19:04.889 iops : min= 288, max= 1556, avg=553.65, stdev=418.55, samples=20 00:19:04.889 lat (msec) : 10=0.04%, 20=0.05%, 50=36.79%, 100=14.02%, 250=49.07% 00:19:04.889 lat (msec) : 500=0.04% 00:19:04.889 cpu : usr=1.13%, sys=1.52%, ctx=5561, majf=0, minf=1 00:19:04.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:04.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.889 issued rwts: total=0,5600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.889 job8: (groupid=0, jobs=1): err= 0: pid=86127: Mon May 13 18:32:19 2024 
00:19:04.889 write: IOPS=554, BW=139MiB/s (145MB/s)(1399MiB/10095msec); 0 zone resets 00:19:04.889 slat (usec): min=19, max=17484, avg=1746.93, stdev=3087.43 00:19:04.889 clat (msec): min=13, max=226, avg=113.67, stdev=19.33 00:19:04.889 lat (msec): min=15, max=226, avg=115.41, stdev=19.40 00:19:04.889 clat percentiles (msec): 00:19:04.889 | 1.00th=[ 38], 5.00th=[ 82], 10.00th=[ 105], 20.00th=[ 108], 00:19:04.889 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 113], 60.00th=[ 115], 00:19:04.889 | 70.00th=[ 117], 80.00th=[ 121], 90.00th=[ 130], 95.00th=[ 138], 00:19:04.889 | 99.00th=[ 190], 99.50th=[ 209], 99.90th=[ 226], 99.95th=[ 228], 00:19:04.889 | 99.99th=[ 228] 00:19:04.889 bw ( KiB/s): min=110371, max=186368, per=9.80%, avg=141604.65, stdev=13642.11, samples=20 00:19:04.889 iops : min= 431, max= 728, avg=553.05, stdev=53.28, samples=20 00:19:04.889 lat (msec) : 20=0.20%, 50=1.45%, 100=4.56%, 250=93.80% 00:19:04.889 cpu : usr=0.99%, sys=1.36%, ctx=7572, majf=0, minf=1 00:19:04.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:04.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.889 issued rwts: total=0,5596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.889 job9: (groupid=0, jobs=1): err= 0: pid=86128: Mon May 13 18:32:19 2024 00:19:04.889 write: IOPS=555, BW=139MiB/s (146MB/s)(1401MiB/10093msec); 0 zone resets 00:19:04.889 slat (usec): min=21, max=17914, avg=1743.63, stdev=3072.75 00:19:04.889 clat (msec): min=6, max=226, avg=113.48, stdev=19.06 00:19:04.889 lat (msec): min=6, max=226, avg=115.22, stdev=19.15 00:19:04.889 clat percentiles (msec): 00:19:04.889 | 1.00th=[ 43], 5.00th=[ 85], 10.00th=[ 105], 20.00th=[ 108], 00:19:04.889 | 30.00th=[ 111], 40.00th=[ 112], 50.00th=[ 113], 60.00th=[ 114], 00:19:04.889 | 70.00th=[ 116], 80.00th=[ 120], 90.00th=[ 128], 95.00th=[ 138], 00:19:04.889 | 99.00th=[ 186], 99.50th=[ 213], 99.90th=[ 226], 99.95th=[ 226], 00:19:04.889 | 99.99th=[ 226] 00:19:04.889 bw ( KiB/s): min=110592, max=167936, per=9.82%, avg=141850.95, stdev=12235.03, samples=20 00:19:04.889 iops : min= 432, max= 656, avg=554.05, stdev=47.77, samples=20 00:19:04.889 lat (msec) : 10=0.04%, 20=0.30%, 50=0.98%, 100=5.71%, 250=92.97% 00:19:04.889 cpu : usr=1.05%, sys=1.72%, ctx=7126, majf=0, minf=1 00:19:04.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:04.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.889 issued rwts: total=0,5604,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.889 job10: (groupid=0, jobs=1): err= 0: pid=86129: Mon May 13 18:32:19 2024 00:19:04.889 write: IOPS=318, BW=79.5MiB/s (83.4MB/s)(807MiB/10146msec); 0 zone resets 00:19:04.889 slat (usec): min=19, max=88887, avg=3051.91, stdev=6747.27 00:19:04.889 clat (msec): min=21, max=662, avg=198.09, stdev=82.93 00:19:04.889 lat (msec): min=21, max=662, avg=201.14, stdev=83.86 00:19:04.889 clat percentiles (msec): 00:19:04.889 | 1.00th=[ 126], 5.00th=[ 138], 10.00th=[ 142], 20.00th=[ 161], 00:19:04.889 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:19:04.889 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 253], 95.00th=[ 414], 00:19:04.889 | 99.00th=[ 542], 
99.50th=[ 609], 99.90th=[ 651], 99.95th=[ 659], 00:19:04.889 | 99.99th=[ 659] 00:19:04.889 bw ( KiB/s): min=34816, max=117248, per=5.61%, avg=80989.95, stdev=23662.70, samples=20 00:19:04.889 iops : min= 136, max= 458, avg=316.35, stdev=92.43, samples=20 00:19:04.889 lat (msec) : 50=0.12%, 100=0.50%, 250=89.34%, 500=8.34%, 750=1.70% 00:19:04.889 cpu : usr=0.65%, sys=0.94%, ctx=2925, majf=0, minf=1 00:19:04.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:04.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:04.889 issued rwts: total=0,3227,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.889 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:04.889 00:19:04.889 Run status group 0 (all jobs): 00:19:04.889 WRITE: bw=1411MiB/s (1479MB/s), 79.5MiB/s-310MiB/s (83.4MB/s-325MB/s), io=14.0GiB (15.0GB), run=10093-10154msec 00:19:04.889 00:19:04.889 Disk stats (read/write): 00:19:04.889 nvme0n1: ios=49/7084, merge=0/0, ticks=34/1216904, in_queue=1216938, util=98.01% 00:19:04.889 nvme10n1: ios=49/24989, merge=0/0, ticks=72/1217165, in_queue=1217237, util=98.52% 00:19:04.889 nvme1n1: ios=32/8613, merge=0/0, ticks=22/1215590, in_queue=1215612, util=98.24% 00:19:04.889 nvme2n1: ios=5/8216, merge=0/0, ticks=5/1213849, in_queue=1213854, util=98.27% 00:19:04.889 nvme3n1: ios=29/11006, merge=0/0, ticks=26/1219806, in_queue=1219832, util=98.40% 00:19:04.889 nvme4n1: ios=0/7254, merge=0/0, ticks=0/1218350, in_queue=1218350, util=98.41% 00:19:04.889 nvme5n1: ios=0/6552, merge=0/0, ticks=0/1215371, in_queue=1215371, util=98.46% 00:19:04.889 nvme6n1: ios=21/11071, merge=0/0, ticks=14/1213452, in_queue=1213466, util=98.46% 00:19:04.889 nvme7n1: ios=0/11083, merge=0/0, ticks=0/1219089, in_queue=1219089, util=98.84% 00:19:04.889 nvme8n1: ios=0/11095, merge=0/0, ticks=0/1218034, in_queue=1218034, util=98.90% 00:19:04.889 nvme9n1: ios=0/6323, merge=0/0, ticks=0/1213774, in_queue=1213774, util=98.82% 00:19:04.889 18:32:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:19:04.889 18:32:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:19:04.889 18:32:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.889 18:32:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:04.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:04.889 18:32:19 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:04.889 18:32:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:19:04.889 18:32:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:04.889 18:32:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK1 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:04.889 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK2 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:04.889 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:04.889 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK3 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:04.890 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK4 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:04.890 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK5 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:04.890 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:04.890 18:32:20 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK6 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:04.890 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK7 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:04.890 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK8 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:04.890 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK9 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.890 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:05.148 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK10 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:05.148 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1215 -- # local i=0 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # grep -q -w SPDK11 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # return 0 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:05.148 18:32:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:05.148 rmmod nvme_tcp 00:19:05.148 rmmod nvme_fabrics 00:19:05.148 rmmod nvme_keyring 00:19:05.148 18:32:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:05.148 18:32:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:19:05.148 18:32:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:19:05.149 18:32:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 85428 ']' 00:19:05.149 18:32:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 85428 00:19:05.149 18:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@946 -- # '[' -z 85428 ']' 00:19:05.149 18:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@950 -- # kill -0 85428 00:19:05.149 18:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # uname 00:19:05.149 18:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:05.149 18:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85428 00:19:05.149 killing process with pid 85428 00:19:05.149 
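The trace above is target/multiconnection.sh unwinding the test: for each of the 11 subsystems it disconnects the initiator-side controller, waits for the SPDKn block device to disappear from lsblk, and then deletes the subsystem on the target over JSON-RPC before unloading the nvme-tcp modules and killing the target process. A condensed sketch of that loop, assuming the waitforserial_disconnect and rpc_cmd helpers behave as they do in autotest_common.sh and that NVMF_SUBSYS carries the subsystem count:

for i in $(seq 1 "$NVMF_SUBSYS"); do
    # Initiator side: tear down the controller bound to subsystem i.
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
    # Block until lsblk no longer lists a namespace with serial SPDK$i.
    waitforserial_disconnect "SPDK${i}"
    # Target side: remove the subsystem through the RPC socket.
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done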
18:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:05.149 18:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:05.149 18:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85428' 00:19:05.149 18:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@965 -- # kill 85428 00:19:05.149 [2024-05-13 18:32:21.031793] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:05.149 18:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@970 -- # wait 85428 00:19:05.713 18:32:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:05.713 18:32:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:05.713 18:32:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:05.714 18:32:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:05.714 18:32:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:05.714 18:32:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.714 18:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.714 18:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.714 18:32:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:05.714 ************************************ 00:19:05.714 END TEST nvmf_multiconnection 00:19:05.714 ************************************ 00:19:05.714 00:19:05.714 real 0m49.503s 00:19:05.714 user 2m41.776s 00:19:05.714 sys 0m24.537s 00:19:05.714 18:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:05.714 18:32:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:05.714 18:32:21 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:05.714 18:32:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:05.714 18:32:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:05.714 18:32:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:05.714 ************************************ 00:19:05.714 START TEST nvmf_initiator_timeout 00:19:05.714 ************************************ 00:19:05.714 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:05.973 * Looking for test storage... 
00:19:05.973 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:05.973 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:05.974 18:32:21 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:05.974 Cannot find device "nvmf_tgt_br" 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # true 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:05.974 Cannot find device "nvmf_tgt_br2" 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # true 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:05.974 Cannot find device "nvmf_tgt_br" 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # true 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:05.974 Cannot find device "nvmf_tgt_br2" 00:19:05.974 18:32:21 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # true 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:05.974 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:05.974 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:05.974 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:06.232 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:06.232 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:06.232 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:06.232 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:06.232 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:06.232 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:06.232 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:06.232 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:06.232 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:06.232 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:06.232 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:06.232 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:06.232 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:06.232 18:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 
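Up to this point nvmf_veth_init has rebuilt the virtual test network from scratch: one veth pair for the initiator, two veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, fixed 10.0.0.x/24 addresses, and a bridge joining the host-side ends. A rough sketch of the topology as it appears in the trace (link-up steps omitted; the iptables ACCEPT rules and ping checks follow next):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator <-> bridge
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # first target interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # second target interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done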
00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:06.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:19:06.233 00:19:06.233 --- 10.0.0.2 ping statistics --- 00:19:06.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.233 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:06.233 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:06.233 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:19:06.233 00:19:06.233 --- 10.0.0.3 ping statistics --- 00:19:06.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.233 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:06.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:06.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:19:06.233 00:19:06.233 --- 10.0.0.1 ping statistics --- 00:19:06.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.233 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@433 -- # return 0 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:06.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
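Before the target comes up, the trace above has already opened the NVMe/TCP port toward the initiator interface, allowed intra-bridge forwarding, and confirmed reachability in both directions; only then is nvme-tcp loaded on the initiator side. A condensed sketch of that verification step:

  # open port 4420 on the initiator-facing interface and ping across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                    # host -> first target address
  ping -c 1 10.0.0.3                                    # host -> second target address
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator
  modprobe nvme-tcp                                     # kernel initiator transport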
00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=86495 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 86495 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@827 -- # '[' -z 86495 ']' 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:06.233 18:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:06.233 [2024-05-13 18:32:22.132418] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:19:06.233 [2024-05-13 18:32:22.132497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.490 [2024-05-13 18:32:22.272077] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:06.490 [2024-05-13 18:32:22.397526] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.490 [2024-05-13 18:32:22.397782] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.490 [2024-05-13 18:32:22.397918] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.490 [2024-05-13 18:32:22.397974] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.490 [2024-05-13 18:32:22.398073] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
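nvmf_tgt is launched inside the namespace so its listeners bind to the namespaced 10.0.0.x interfaces, and no RPC is issued until the application is listening on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait step; the polling loop is an illustrative stand-in for the harness's waitforlisten helper:

  # start the target in the namespace and wait for its JSON-RPC socket
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # simplified wait; waitforlisten also checks the pid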
00:19:06.490 [2024-05-13 18:32:22.398268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.490 [2024-05-13 18:32:22.398824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.490 [2024-05-13 18:32:22.398904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:06.490 [2024-05-13 18:32:22.398907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # return 0 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:07.426 Malloc0 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:07.426 Delay0 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:07.426 [2024-05-13 18:32:23.274032] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:07.426 [2024-05-13 18:32:23.305986] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:07.426 [2024-05-13 18:32:23.306349] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.426 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:07.685 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:07.685 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1194 -- # local i=0 00:19:07.685 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:19:07.685 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:19:07.685 18:32:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1201 -- # sleep 2 00:19:09.584 18:32:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:19:09.584 18:32:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:19:09.584 18:32:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:19:09.584 18:32:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:19:09.584 18:32:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.584 18:32:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # return 0 00:19:09.584 18:32:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=86583 00:19:09.584 18:32:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:09.585 18:32:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:09.585 [global] 00:19:09.585 thread=1 00:19:09.585 invalidate=1 00:19:09.585 rw=write 00:19:09.585 time_based=1 00:19:09.585 runtime=60 00:19:09.585 ioengine=libaio 00:19:09.585 direct=1 00:19:09.585 bs=4096 00:19:09.585 iodepth=1 00:19:09.585 norandommap=0 00:19:09.585 numjobs=1 00:19:09.585 00:19:09.585 verify_dump=1 00:19:09.585 verify_backlog=512 00:19:09.585 verify_state_save=0 00:19:09.585 do_verify=1 00:19:09.585 verify=crc32c-intel 00:19:09.585 [job0] 00:19:09.585 
filename=/dev/nvme0n1 00:19:09.843 Could not set queue depth (nvme0n1) 00:19:09.843 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:09.843 fio-3.35 00:19:09.843 Starting 1 thread 00:19:13.124 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:13.124 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.124 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:13.124 true 00:19:13.124 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.124 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:13.124 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.124 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:13.124 true 00:19:13.124 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.124 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:13.125 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.125 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:13.125 true 00:19:13.125 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.125 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:13.125 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.125 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:13.125 true 00:19:13.125 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.125 18:32:28 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:19:15.656 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:15.656 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.656 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:15.656 true 00:19:15.656 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.656 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:15.657 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.657 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:15.657 true 00:19:15.657 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.657 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:15.657 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.657 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 
-- # set +x 00:19:15.657 true 00:19:15.657 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.657 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:15.657 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.657 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:15.657 true 00:19:15.657 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.657 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:15.657 18:32:31 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 86583 00:20:11.873 00:20:11.873 job0: (groupid=0, jobs=1): err= 0: pid=86604: Mon May 13 18:33:25 2024 00:20:11.873 read: IOPS=870, BW=3482KiB/s (3565kB/s)(204MiB/60000msec) 00:20:11.873 slat (usec): min=12, max=9063, avg=16.06, stdev=52.38 00:20:11.873 clat (usec): min=3, max=40594k, avg=962.21, stdev=177634.40 00:20:11.873 lat (usec): min=177, max=40594k, avg=978.27, stdev=177634.40 00:20:11.873 clat percentiles (usec): 00:20:11.873 | 1.00th=[ 169], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 176], 00:20:11.873 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 186], 00:20:11.873 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 204], 00:20:11.873 | 99.00th=[ 221], 99.50th=[ 233], 99.90th=[ 322], 99.95th=[ 416], 00:20:11.873 | 99.99th=[ 1029] 00:20:11.873 write: IOPS=871, BW=3486KiB/s (3569kB/s)(204MiB/60000msec); 0 zone resets 00:20:11.873 slat (usec): min=19, max=632, avg=22.81, stdev= 6.12 00:20:11.873 clat (usec): min=70, max=4544, avg=144.17, stdev=40.34 00:20:11.873 lat (usec): min=145, max=4591, avg=166.98, stdev=41.15 00:20:11.873 clat percentiles (usec): 00:20:11.873 | 1.00th=[ 130], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:20:11.873 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 143], 60.00th=[ 145], 00:20:11.873 | 70.00th=[ 147], 80.00th=[ 151], 90.00th=[ 155], 95.00th=[ 159], 00:20:11.873 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 330], 99.95th=[ 494], 00:20:11.873 | 99.99th=[ 2212] 00:20:11.873 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=10501.92, stdev=1970.06, samples=39 00:20:11.874 iops : min= 1024, max= 3072, avg=2625.46, stdev=492.50, samples=39 00:20:11.874 lat (usec) : 4=0.01%, 100=0.01%, 250=99.78%, 500=0.18%, 750=0.02% 00:20:11.874 lat (usec) : 1000=0.01% 00:20:11.874 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:20:11.874 cpu : usr=0.61%, sys=2.52%, ctx=104582, majf=0, minf=2 00:20:11.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:11.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.874 issued rwts: total=52224,52284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:11.874 00:20:11.874 Run status group 0 (all jobs): 00:20:11.874 READ: bw=3482KiB/s (3565kB/s), 3482KiB/s-3482KiB/s (3565kB/s-3565kB/s), io=204MiB (214MB), run=60000-60000msec 00:20:11.874 WRITE: bw=3486KiB/s (3569kB/s), 3486KiB/s-3486KiB/s (3569kB/s-3569kB/s), io=204MiB (214MB), run=60000-60000msec 00:20:11.874 00:20:11.874 Disk stats (read/write): 00:20:11.874 nvme0n1: ios=52159/52224, merge=0/0, ticks=9944/8080, in_queue=18024, util=99.63% 
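The initiator-timeout scenario above is driven entirely through the Delay0 passthrough bdev: its latencies start at 30 (the bdev was created with -r/-t/-w/-n 30), are inflated to 31000000 (310000000 for p99 writes) while the 60-second fio job is in flight, presumably to push outstanding commands past the initiator's timeout handling, and are then dropped back to 30 so the job can drain and complete. The restored throughput is consistent with the summary above: 3482 KiB/s sustained for 60 s is about 204 MiB. A sketch of that RPC sequence, assuming the delay-bdev latency arguments are microseconds:

  # stall I/O through the delay bdev, then release it (values as in the trace)
  rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000    # ~31 s
  rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
  rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
  rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000   # ~310 s
  sleep 3                                                        # leave fio queued behind the delay
  rpc_cmd bdev_delay_update_latency Delay0 avg_read  30          # restore ~30 us
  rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
  rpc_cmd bdev_delay_update_latency Delay0 p99_read  30
  rpc_cmd bdev_delay_update_latency Delay0 p99_write 30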
00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:11.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1215 -- # local i=0 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:11.874 nvmf hotplug test: fio successful as expected 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # return 0 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:11.874 18:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:11.874 rmmod nvme_tcp 00:20:11.874 rmmod nvme_fabrics 00:20:11.874 rmmod nvme_keyring 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 86495 ']' 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 86495 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@946 -- # '[' -z 86495 ']' 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # kill -0 86495 00:20:11.874 18:33:26 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # uname 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86495 00:20:11.874 killing process with pid 86495 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86495' 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@965 -- # kill 86495 00:20:11.874 [2024-05-13 18:33:26.040105] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@970 -- # wait 86495 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:11.874 ************************************ 00:20:11.874 END TEST nvmf_initiator_timeout 00:20:11.874 ************************************ 00:20:11.874 00:20:11.874 real 1m4.714s 00:20:11.874 user 4m1.816s 00:20:11.874 sys 0m11.520s 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:11.874 18:33:26 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:11.874 18:33:26 nvmf_tcp -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:20:11.874 18:33:26 nvmf_tcp -- nvmf/nvmf.sh@84 -- # timing_exit target 00:20:11.874 18:33:26 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:11.874 18:33:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:11.874 18:33:26 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_enter host 00:20:11.874 18:33:26 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:11.874 18:33:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:11.874 18:33:26 nvmf_tcp -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:20:11.874 18:33:26 nvmf_tcp -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:11.874 18:33:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:11.874 18:33:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:11.874 18:33:26 
nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:11.874 ************************************ 00:20:11.874 START TEST nvmf_multicontroller 00:20:11.874 ************************************ 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:11.874 * Looking for test storage... 00:20:11.874 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:20:11.874 18:33:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- 
host/multicontroller.sh@23 -- # nvmftestinit 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:11.875 Cannot find device "nvmf_tgt_br" 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@155 -- # true 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:11.875 Cannot find device "nvmf_tgt_br2" 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@156 -- # true 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 
00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:11.875 Cannot find device "nvmf_tgt_br" 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@158 -- # true 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:11.875 Cannot find device "nvmf_tgt_br2" 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@159 -- # true 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:11.875 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:11.875 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip link set nvmf_br up 
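nvmf_veth_init runs again here for the multicontroller test, and its teardown of the previous topology is tolerant of partial state: the "Cannot find device" and "Cannot open network namespace" messages come from delete commands whose failures are swallowed (the trailing true at the same nvmf/common.sh line numbers suggests an "|| true" guard) before the namespace, veth pairs and bridge are recreated. A sketch of that cleanup pattern, not the harness's literal code:

  # best-effort teardown: ignore links/namespaces that are already gone
  ip link set nvmf_init_br nomaster || true
  ip link set nvmf_tgt_br nomaster  || true
  ip link set nvmf_tgt_br2 nomaster || true
  ip link delete nvmf_br type bridge || true
  ip link delete nvmf_init_if || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if  || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true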
00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:11.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:11.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:20:11.875 00:20:11.875 --- 10.0.0.2 ping statistics --- 00:20:11.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.875 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:11.875 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:11.875 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:20:11.875 00:20:11.875 --- 10.0.0.3 ping statistics --- 00:20:11.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.875 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:11.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:11.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:20:11.875 00:20:11.875 --- 10.0.0.1 ping statistics --- 00:20:11.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:11.875 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@433 -- # return 0 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:11.875 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:11.876 18:33:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:11.876 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:11.876 18:33:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:11.876 18:33:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
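One detail worth noting in the trace above: after connectivity is verified, the netns-exec prefix array is spliced onto the front of NVMF_APP, so every later nvmfappstart call automatically runs the target inside the namespace. A short sketch of that bash idiom:

  # prepend "ip netns exec <ns>" to the target command array (nvmf/common.sh@209 above)
  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  # "${NVMF_APP[@]}" now expands to: ip netns exec nvmf_tgt_ns_spdk <original nvmf_tgt invocation>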
00:20:11.876 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=87445 00:20:11.876 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 87445 00:20:11.876 18:33:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 87445 ']' 00:20:11.876 18:33:26 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:11.876 18:33:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:11.876 18:33:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:11.876 18:33:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:11.876 18:33:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:11.876 18:33:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:11.876 [2024-05-13 18:33:27.008628] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:20:11.876 [2024-05-13 18:33:27.008731] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:11.876 [2024-05-13 18:33:27.144368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:11.876 [2024-05-13 18:33:27.266372] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:11.876 [2024-05-13 18:33:27.266686] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:11.876 [2024-05-13 18:33:27.266866] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:11.876 [2024-05-13 18:33:27.267056] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:11.876 [2024-05-13 18:33:27.267122] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:11.876 [2024-05-13 18:33:27.267556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:11.876 [2024-05-13 18:33:27.267706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:11.876 [2024-05-13 18:33:27.267712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.142 18:33:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:12.142 18:33:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:20:12.142 18:33:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.142 18:33:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:12.142 18:33:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.142 [2024-05-13 18:33:28.024764] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.142 Malloc0 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.142 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.425 [2024-05-13 18:33:28.090019] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:12.425 [2024-05-13 
18:33:28.090422] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.425 [2024-05-13 18:33:28.098217] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.425 Malloc1 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
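At this point both subsystems are populated and reachable over the same interface: cnode1 backs Malloc0 and cnode2 backs Malloc1, each listening on 10.0.0.2 ports 4420 and 4421, and the checks that follow are issued against bdevperf's own RPC socket. The first bdev_nvme_attach_controller creates NVMe0; every later attach that reuses the name NVMe0 with a different host NQN, a different subsystem, or multipath disabled is expected to be rejected with "A controller named NVMe0 already exists ...". A hedged sketch of that sequence (flags exactly as they appear in the trace; "&& exit 1" stands in for the harness's NOT wrapper):

  # first attach succeeds and exposes NVMe0n1
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
  # conflicting attaches under the same controller name must fail
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
      -q nqn.2021-09-7.io.spdk:00001 && exit 1     # different host NQN
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 \
      && exit 1                                    # different subsystem
  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
      -x disable && exit 1                         # multipath explicitly disabled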
00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=87497 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 87497 /var/tmp/bdevperf.sock 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 87497 ']' 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.425 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:12.426 18:33:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.803 NVMe0n1 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.803 1 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:13.803 18:33:29 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.803 2024/05/13 18:33:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:13.803 request: 00:20:13.803 { 00:20:13.803 "method": "bdev_nvme_attach_controller", 00:20:13.803 "params": { 00:20:13.803 "name": "NVMe0", 00:20:13.803 "trtype": "tcp", 00:20:13.803 "traddr": "10.0.0.2", 00:20:13.803 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:13.803 "hostaddr": "10.0.0.2", 00:20:13.803 "hostsvcid": "60000", 00:20:13.803 "adrfam": "ipv4", 00:20:13.803 "trsvcid": "4420", 00:20:13.803 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:20:13.803 } 00:20:13.803 } 00:20:13.803 Got JSON-RPC error response 00:20:13.803 GoRPCClient: error on JSON-RPC call 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.803 2024/05/13 18:33:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:13.803 request: 00:20:13.803 { 00:20:13.803 "method": "bdev_nvme_attach_controller", 00:20:13.803 "params": { 00:20:13.803 "name": "NVMe0", 00:20:13.803 "trtype": "tcp", 00:20:13.803 "traddr": "10.0.0.2", 00:20:13.803 "hostaddr": "10.0.0.2", 00:20:13.803 "hostsvcid": "60000", 00:20:13.803 "adrfam": "ipv4", 00:20:13.803 "trsvcid": "4420", 00:20:13.803 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:20:13.803 } 00:20:13.803 } 00:20:13.803 Got JSON-RPC error response 00:20:13.803 GoRPCClient: error on JSON-RPC call 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.803 2024/05/13 18:33:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:20:13.803 request: 00:20:13.803 { 00:20:13.803 "method": "bdev_nvme_attach_controller", 00:20:13.803 "params": { 00:20:13.803 "name": "NVMe0", 00:20:13.803 "trtype": "tcp", 00:20:13.803 "traddr": "10.0.0.2", 00:20:13.803 "hostaddr": "10.0.0.2", 00:20:13.803 "hostsvcid": "60000", 00:20:13.803 "adrfam": "ipv4", 00:20:13.803 "trsvcid": "4420", 00:20:13.803 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.803 "multipath": "disable" 00:20:13.803 } 00:20:13.803 } 00:20:13.803 Got JSON-RPC error response 00:20:13.803 GoRPCClient: error on JSON-RPC call 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:13.803 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.804 2024/05/13 18:33:29 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:20:13.804 request: 00:20:13.804 { 00:20:13.804 "method": "bdev_nvme_attach_controller", 00:20:13.804 "params": { 00:20:13.804 "name": "NVMe0", 00:20:13.804 "trtype": "tcp", 00:20:13.804 "traddr": "10.0.0.2", 00:20:13.804 "hostaddr": "10.0.0.2", 00:20:13.804 "hostsvcid": "60000", 00:20:13.804 "adrfam": "ipv4", 
00:20:13.804 "trsvcid": "4420", 00:20:13.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.804 "multipath": "failover" 00:20:13.804 } 00:20:13.804 } 00:20:13.804 Got JSON-RPC error response 00:20:13.804 GoRPCClient: error on JSON-RPC call 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.804 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.804 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:13.804 18:33:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:15.178 0 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 87497 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 87497 ']' 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 87497 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87497 00:20:15.178 killing process with pid 87497 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87497' 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 87497 00:20:15.178 18:33:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 87497 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:20:15.178 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:15.178 [2024-05-13 18:33:28.220442] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
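[Editor's sketch, not output from this run] The Code=-114 failures above are host/multicontroller.sh deliberately re-attaching under the existing controller name NVMe0 with a mismatched host NQN, a different subsystem, or multipath disabled. rpc_cmd in the trace is a thin wrapper around SPDK's scripts/rpc.py, so the pattern the test exercises can be restated roughly as the calls below; the flags are the ones visible in the log, while the scripts/rpc.py invocation form is an assumption.
# first path, pinning the host address (-i) and host service id (-c)
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# re-attaching NVMe0 with another hostnqn (-q), another subnqn, or
# "-x disable" is rejected with Code=-114, as in the errors logged above
# a second path to the same subsystem on listener 4421 is accepted
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1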
00:20:15.178 [2024-05-13 18:33:28.220622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87497 ] 00:20:15.178 [2024-05-13 18:33:28.361247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.178 [2024-05-13 18:33:28.484400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:15.178 [2024-05-13 18:33:29.637374] bdev.c:4555:bdev_name_add: *ERROR*: Bdev name 2dd5bd7f-189a-4cf4-b98b-f32904a3ae0f already exists 00:20:15.178 [2024-05-13 18:33:29.637475] bdev.c:7672:bdev_register: *ERROR*: Unable to add uuid:2dd5bd7f-189a-4cf4-b98b-f32904a3ae0f alias for bdev NVMe1n1 00:20:15.178 [2024-05-13 18:33:29.637512] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:15.178 Running I/O for 1 seconds... 00:20:15.178 00:20:15.178 Latency(us) 00:20:15.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.178 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:15.178 NVMe0n1 : 1.00 19580.53 76.49 0.00 0.00 6525.59 3381.06 11677.32 00:20:15.178 =================================================================================================================== 00:20:15.178 Total : 19580.53 76.49 0.00 0.00 6525.59 3381.06 11677.32 00:20:15.178 Received shutdown signal, test time was about 1.000000 seconds 00:20:15.178 00:20:15.178 Latency(us) 00:20:15.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:15.178 =================================================================================================================== 00:20:15.178 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:15.178 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:20:15.178 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.437 rmmod nvme_tcp 00:20:15.437 rmmod nvme_fabrics 00:20:15.437 rmmod nvme_keyring 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 87445 ']' 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 87445 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 87445 ']' 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@950 -- # kill -0 87445 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87445 00:20:15.437 killing process with pid 87445 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87445' 00:20:15.437 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 87445 00:20:15.438 [2024-05-13 18:33:31.271021] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:15.438 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 87445 00:20:15.696 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:15.696 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:15.696 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:15.696 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.696 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.696 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.696 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.696 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.696 18:33:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:15.696 00:20:15.696 real 0m5.154s 00:20:15.696 user 0m16.234s 00:20:15.696 sys 0m1.130s 00:20:15.696 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:15.696 ************************************ 00:20:15.696 END TEST nvmf_multicontroller 00:20:15.696 18:33:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:15.696 ************************************ 00:20:15.963 18:33:31 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:15.963 18:33:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:15.963 18:33:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:15.963 18:33:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:15.963 ************************************ 00:20:15.963 START TEST nvmf_aer 00:20:15.963 ************************************ 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:15.963 * Looking for test storage... 
00:20:15.963 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.963 18:33:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:15.964 
18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:15.964 Cannot find device "nvmf_tgt_br" 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@155 -- # true 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:15.964 Cannot find device "nvmf_tgt_br2" 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@156 -- # true 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:15.964 Cannot find device "nvmf_tgt_br" 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@158 -- # true 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:15.964 Cannot find device "nvmf_tgt_br2" 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@159 -- # true 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:15.964 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@162 -- # true 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:15.964 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@163 -- # true 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:15.964 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:16.236 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:16.236 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:16.236 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:16.236 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
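[Editor's sketch] The nvmf_veth_init trace running here builds the usual SPDK virtual test network: veth pairs plus a bridge, with the target ends moved into the nvmf_tgt_ns_spdk namespace so the initiator at 10.0.0.1 can reach the target at 10.0.0.2 over TCP port 4420. Condensed from the commands in the surrounding log (the second target interface, 10.0.0.3, and the link-up steps are elided):
# initiator side stays in the root namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip addr add 10.0.0.1/24 dev nvmf_init_if
# target side lives in a private network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
# a bridge stitches the peer ends together and NVMe/TCP traffic is allowed in
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT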
00:20:16.236 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:16.236 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:16.236 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:16.236 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:16.236 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:16.236 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:16.236 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:16.236 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:16.236 18:33:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:16.236 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:16.236 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:16.236 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:16.236 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:16.236 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:16.236 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:16.236 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:16.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:20:16.237 00:20:16.237 --- 10.0.0.2 ping statistics --- 00:20:16.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.237 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:16.237 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:16.237 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:20:16.237 00:20:16.237 --- 10.0.0.3 ping statistics --- 00:20:16.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.237 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:16.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:16.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:20:16.237 00:20:16.237 --- 10.0.0.1 ping statistics --- 00:20:16.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.237 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@433 -- # return 0 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=87754 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 87754 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 87754 ']' 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:16.237 18:33:32 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:16.237 [2024-05-13 18:33:32.172777] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:20:16.237 [2024-05-13 18:33:32.173457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.495 [2024-05-13 18:33:32.311427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:16.495 [2024-05-13 18:33:32.432618] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.495 [2024-05-13 18:33:32.432682] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:16.495 [2024-05-13 18:33:32.432695] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.495 [2024-05-13 18:33:32.432704] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.495 [2024-05-13 18:33:32.432711] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:16.495 [2024-05-13 18:33:32.432829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:16.495 [2024-05-13 18:33:32.434379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.495 [2024-05-13 18:33:32.434517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:16.495 [2024-05-13 18:33:32.434524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:17.430 [2024-05-13 18:33:33.183724] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:17.430 Malloc0 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:17.430 [2024-05-13 18:33:33.242357] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:17.430 [2024-05-13 18:33:33.242768] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.430 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:17.430 [ 00:20:17.430 { 00:20:17.430 "allow_any_host": true, 00:20:17.430 "hosts": [], 00:20:17.430 "listen_addresses": [], 00:20:17.430 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:17.430 "subtype": "Discovery" 00:20:17.430 }, 00:20:17.430 { 00:20:17.430 "allow_any_host": true, 00:20:17.430 "hosts": [], 00:20:17.430 "listen_addresses": [ 00:20:17.430 { 00:20:17.430 "adrfam": "IPv4", 00:20:17.430 "traddr": "10.0.0.2", 00:20:17.430 "trsvcid": "4420", 00:20:17.430 "trtype": "TCP" 00:20:17.430 } 00:20:17.430 ], 00:20:17.430 "max_cntlid": 65519, 00:20:17.430 "max_namespaces": 2, 00:20:17.430 "min_cntlid": 1, 00:20:17.430 "model_number": "SPDK bdev Controller", 00:20:17.430 "namespaces": [ 00:20:17.430 { 00:20:17.430 "bdev_name": "Malloc0", 00:20:17.431 "name": "Malloc0", 00:20:17.431 "nguid": "DF3196AC0E934B0583ADD4AEFBD6192F", 00:20:17.431 "nsid": 1, 00:20:17.431 "uuid": "df3196ac-0e93-4b05-83ad-d4aefbd6192f" 00:20:17.431 } 00:20:17.431 ], 00:20:17.431 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.431 "serial_number": "SPDK00000000000001", 00:20:17.431 "subtype": "NVMe" 00:20:17.431 } 00:20:17.431 ] 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=87809 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:20:17.431 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:17.690 Malloc1 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:17.690 Asynchronous Event Request test 00:20:17.690 Attaching to 10.0.0.2 00:20:17.690 Attached to 10.0.0.2 00:20:17.690 Registering asynchronous event callbacks... 00:20:17.690 Starting namespace attribute notice tests for all controllers... 00:20:17.690 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:17.690 aer_cb - Changed Namespace 00:20:17.690 Cleaning up... 00:20:17.690 [ 00:20:17.690 { 00:20:17.690 "allow_any_host": true, 00:20:17.690 "hosts": [], 00:20:17.690 "listen_addresses": [], 00:20:17.690 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:17.690 "subtype": "Discovery" 00:20:17.690 }, 00:20:17.690 { 00:20:17.690 "allow_any_host": true, 00:20:17.690 "hosts": [], 00:20:17.690 "listen_addresses": [ 00:20:17.690 { 00:20:17.690 "adrfam": "IPv4", 00:20:17.690 "traddr": "10.0.0.2", 00:20:17.690 "trsvcid": "4420", 00:20:17.690 "trtype": "TCP" 00:20:17.690 } 00:20:17.690 ], 00:20:17.690 "max_cntlid": 65519, 00:20:17.690 "max_namespaces": 2, 00:20:17.690 "min_cntlid": 1, 00:20:17.690 "model_number": "SPDK bdev Controller", 00:20:17.690 "namespaces": [ 00:20:17.690 { 00:20:17.690 "bdev_name": "Malloc0", 00:20:17.690 "name": "Malloc0", 00:20:17.690 "nguid": "DF3196AC0E934B0583ADD4AEFBD6192F", 00:20:17.690 "nsid": 1, 00:20:17.690 "uuid": "df3196ac-0e93-4b05-83ad-d4aefbd6192f" 00:20:17.690 }, 00:20:17.690 { 00:20:17.690 "bdev_name": "Malloc1", 00:20:17.690 "name": "Malloc1", 00:20:17.690 "nguid": "B6DE45EFFFFA41AF84242E3F36FE3F2F", 00:20:17.690 "nsid": 2, 00:20:17.690 "uuid": "b6de45ef-fffa-41af-8424-2e3f36fe3f2f" 00:20:17.690 } 00:20:17.690 ], 00:20:17.690 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.690 "serial_number": "SPDK00000000000001", 00:20:17.690 "subtype": "NVMe" 00:20:17.690 } 00:20:17.690 ] 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 87809 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- 
# rpc_cmd bdev_malloc_delete Malloc1 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:17.690 18:33:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:17.950 rmmod nvme_tcp 00:20:17.950 rmmod nvme_fabrics 00:20:17.950 rmmod nvme_keyring 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 87754 ']' 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 87754 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 87754 ']' 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 87754 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87754 00:20:17.950 killing process with pid 87754 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87754' 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 87754 00:20:17.950 [2024-05-13 18:33:33.743927] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:17.950 18:33:33 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 87754 00:20:18.214 18:33:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:18.214 18:33:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:18.214 18:33:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:18.214 18:33:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:18.214 18:33:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 
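[Editor's sketch] The aer.sh run being torn down above boils down to a short target-side setup followed by the AER exerciser. Restated via SPDK's scripts/rpc.py (rpc_cmd in the trace wraps it and defaults to /var/tmp/spdk.sock), with the arguments taken from the logged commands; paths are relative to the SPDK repo and are assumptions on my part:
# target side: TCP transport, one malloc namespace, subsystem cnode1, listener on 10.0.0.2:4420
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# host side: the aer tool connects, creates the touch file the wrapper script
# polls for once it is ready, then reports the namespace-change notice that
# aer.sh triggers by adding Malloc1 as nsid 2
test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file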
00:20:18.214 18:33:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.214 18:33:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.214 18:33:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.214 18:33:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:18.214 00:20:18.214 real 0m2.372s 00:20:18.214 user 0m6.382s 00:20:18.214 sys 0m0.633s 00:20:18.214 18:33:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:18.214 18:33:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:18.214 ************************************ 00:20:18.214 END TEST nvmf_aer 00:20:18.214 ************************************ 00:20:18.215 18:33:34 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:18.215 18:33:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:18.215 18:33:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:18.215 18:33:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:18.215 ************************************ 00:20:18.215 START TEST nvmf_async_init 00:20:18.215 ************************************ 00:20:18.215 18:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:18.472 * Looking for test storage... 00:20:18.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:18.472 18:33:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:18.472 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:18.472 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.472 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.472 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.472 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.472 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.472 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.472 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.473 18:33:34 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.473 18:33:34 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=4deb8bcc7a404724b84c6cfea48227a3 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:18.473 Cannot find device "nvmf_tgt_br" 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@155 -- # true 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:18.473 Cannot find device "nvmf_tgt_br2" 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@156 -- # true 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:18.473 Cannot find device "nvmf_tgt_br" 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@158 -- # true 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:18.473 Cannot find device "nvmf_tgt_br2" 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@159 -- # true 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:18.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:18.473 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:18.473 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:18.732 
18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:18.732 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.732 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:20:18.732 00:20:18.732 --- 10.0.0.2 ping statistics --- 00:20:18.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.732 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:18.732 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:18.732 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:20:18.732 00:20:18.732 --- 10.0.0.3 ping statistics --- 00:20:18.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.732 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:18.732 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.732 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:18.732 00:20:18.732 --- 10.0.0.1 ping statistics --- 00:20:18.732 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.732 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@433 -- # return 0 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=87985 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 87985 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 87985 ']' 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:18.732 18:33:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:18.732 [2024-05-13 18:33:34.652919] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:20:18.732 [2024-05-13 18:33:34.653036] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.991 [2024-05-13 18:33:34.792387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.991 [2024-05-13 18:33:34.914661] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.991 [2024-05-13 18:33:34.914723] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:18.991 [2024-05-13 18:33:34.914735] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.991 [2024-05-13 18:33:34.914743] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.991 [2024-05-13 18:33:34.914751] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.991 [2024-05-13 18:33:34.914783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:19.925 [2024-05-13 18:33:35.630324] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:19.925 null0 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 4deb8bcc7a404724b84c6cfea48227a3 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:19.925 
18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:19.925 [2024-05-13 18:33:35.674264] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:19.925 [2024-05-13 18:33:35.674515] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.925 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.184 nvme0n1 00:20:20.184 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.184 18:33:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:20.184 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.184 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.184 [ 00:20:20.184 { 00:20:20.184 "aliases": [ 00:20:20.184 "4deb8bcc-7a40-4724-b84c-6cfea48227a3" 00:20:20.184 ], 00:20:20.184 "assigned_rate_limits": { 00:20:20.184 "r_mbytes_per_sec": 0, 00:20:20.184 "rw_ios_per_sec": 0, 00:20:20.184 "rw_mbytes_per_sec": 0, 00:20:20.184 "w_mbytes_per_sec": 0 00:20:20.184 }, 00:20:20.184 "block_size": 512, 00:20:20.184 "claimed": false, 00:20:20.184 "driver_specific": { 00:20:20.184 "mp_policy": "active_passive", 00:20:20.184 "nvme": [ 00:20:20.184 { 00:20:20.184 "ctrlr_data": { 00:20:20.184 "ana_reporting": false, 00:20:20.184 "cntlid": 1, 00:20:20.184 "firmware_revision": "24.05", 00:20:20.184 "model_number": "SPDK bdev Controller", 00:20:20.184 "multi_ctrlr": true, 00:20:20.184 "oacs": { 00:20:20.184 "firmware": 0, 00:20:20.184 "format": 0, 00:20:20.184 "ns_manage": 0, 00:20:20.184 "security": 0 00:20:20.184 }, 00:20:20.184 "serial_number": "00000000000000000000", 00:20:20.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.184 "vendor_id": "0x8086" 00:20:20.184 }, 00:20:20.184 "ns_data": { 00:20:20.184 "can_share": true, 00:20:20.184 "id": 1 00:20:20.184 }, 00:20:20.184 "trid": { 00:20:20.184 "adrfam": "IPv4", 00:20:20.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.184 "traddr": "10.0.0.2", 00:20:20.184 "trsvcid": "4420", 00:20:20.184 "trtype": "TCP" 00:20:20.184 }, 00:20:20.184 "vs": { 00:20:20.184 "nvme_version": "1.3" 00:20:20.184 } 00:20:20.184 } 00:20:20.184 ] 00:20:20.184 }, 00:20:20.184 "memory_domains": [ 00:20:20.184 { 00:20:20.184 "dma_device_id": "system", 00:20:20.184 "dma_device_type": 1 00:20:20.184 } 00:20:20.184 ], 00:20:20.184 "name": "nvme0n1", 00:20:20.184 "num_blocks": 2097152, 00:20:20.184 "product_name": "NVMe disk", 00:20:20.184 "supported_io_types": { 00:20:20.184 "abort": true, 00:20:20.184 "compare": true, 00:20:20.184 "compare_and_write": true, 00:20:20.184 "flush": true, 00:20:20.184 "nvme_admin": true, 00:20:20.184 "nvme_io": true, 00:20:20.184 "read": true, 00:20:20.184 "reset": true, 00:20:20.184 "unmap": false, 00:20:20.184 "write": true, 00:20:20.184 "write_zeroes": true 00:20:20.184 }, 
00:20:20.184 "uuid": "4deb8bcc-7a40-4724-b84c-6cfea48227a3", 00:20:20.184 "zoned": false 00:20:20.184 } 00:20:20.184 ] 00:20:20.184 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.184 18:33:35 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:20.184 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.184 18:33:35 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.184 [2024-05-13 18:33:35.944463] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:20.184 [2024-05-13 18:33:35.944594] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8b850 (9): Bad file descriptor 00:20:20.184 [2024-05-13 18:33:36.076827] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:20.184 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.184 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:20.184 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.184 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.184 [ 00:20:20.184 { 00:20:20.184 "aliases": [ 00:20:20.184 "4deb8bcc-7a40-4724-b84c-6cfea48227a3" 00:20:20.184 ], 00:20:20.184 "assigned_rate_limits": { 00:20:20.184 "r_mbytes_per_sec": 0, 00:20:20.184 "rw_ios_per_sec": 0, 00:20:20.184 "rw_mbytes_per_sec": 0, 00:20:20.184 "w_mbytes_per_sec": 0 00:20:20.184 }, 00:20:20.184 "block_size": 512, 00:20:20.184 "claimed": false, 00:20:20.184 "driver_specific": { 00:20:20.184 "mp_policy": "active_passive", 00:20:20.184 "nvme": [ 00:20:20.184 { 00:20:20.184 "ctrlr_data": { 00:20:20.184 "ana_reporting": false, 00:20:20.184 "cntlid": 2, 00:20:20.184 "firmware_revision": "24.05", 00:20:20.184 "model_number": "SPDK bdev Controller", 00:20:20.184 "multi_ctrlr": true, 00:20:20.184 "oacs": { 00:20:20.184 "firmware": 0, 00:20:20.184 "format": 0, 00:20:20.184 "ns_manage": 0, 00:20:20.184 "security": 0 00:20:20.184 }, 00:20:20.184 "serial_number": "00000000000000000000", 00:20:20.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.184 "vendor_id": "0x8086" 00:20:20.184 }, 00:20:20.184 "ns_data": { 00:20:20.184 "can_share": true, 00:20:20.184 "id": 1 00:20:20.184 }, 00:20:20.184 "trid": { 00:20:20.184 "adrfam": "IPv4", 00:20:20.184 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.184 "traddr": "10.0.0.2", 00:20:20.184 "trsvcid": "4420", 00:20:20.184 "trtype": "TCP" 00:20:20.184 }, 00:20:20.184 "vs": { 00:20:20.184 "nvme_version": "1.3" 00:20:20.184 } 00:20:20.184 } 00:20:20.184 ] 00:20:20.184 }, 00:20:20.184 "memory_domains": [ 00:20:20.184 { 00:20:20.184 "dma_device_id": "system", 00:20:20.184 "dma_device_type": 1 00:20:20.184 } 00:20:20.184 ], 00:20:20.184 "name": "nvme0n1", 00:20:20.184 "num_blocks": 2097152, 00:20:20.184 "product_name": "NVMe disk", 00:20:20.184 "supported_io_types": { 00:20:20.184 "abort": true, 00:20:20.184 "compare": true, 00:20:20.184 "compare_and_write": true, 00:20:20.184 "flush": true, 00:20:20.184 "nvme_admin": true, 00:20:20.184 "nvme_io": true, 00:20:20.184 "read": true, 00:20:20.184 "reset": true, 00:20:20.184 "unmap": false, 00:20:20.184 "write": true, 00:20:20.184 "write_zeroes": true 00:20:20.184 }, 00:20:20.184 "uuid": "4deb8bcc-7a40-4724-b84c-6cfea48227a3", 00:20:20.184 
"zoned": false 00:20:20.184 } 00:20:20.184 ] 00:20:20.184 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.184 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.184 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.184 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.184 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.184 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:20.184 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.QqWUpxSwd9 00:20:20.184 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:20.442 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.QqWUpxSwd9 00:20:20.442 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:20.442 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.442 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.442 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.442 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.443 [2024-05-13 18:33:36.140647] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:20.443 [2024-05-13 18:33:36.140837] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QqWUpxSwd9 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.443 [2024-05-13 18:33:36.148636] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QqWUpxSwd9 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.443 [2024-05-13 18:33:36.156626] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.443 [2024-05-13 18:33:36.156701] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:20:20.443 nvme0n1 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.443 [ 00:20:20.443 { 00:20:20.443 "aliases": [ 00:20:20.443 "4deb8bcc-7a40-4724-b84c-6cfea48227a3" 00:20:20.443 ], 00:20:20.443 "assigned_rate_limits": { 00:20:20.443 "r_mbytes_per_sec": 0, 00:20:20.443 "rw_ios_per_sec": 0, 00:20:20.443 "rw_mbytes_per_sec": 0, 00:20:20.443 "w_mbytes_per_sec": 0 00:20:20.443 }, 00:20:20.443 "block_size": 512, 00:20:20.443 "claimed": false, 00:20:20.443 "driver_specific": { 00:20:20.443 "mp_policy": "active_passive", 00:20:20.443 "nvme": [ 00:20:20.443 { 00:20:20.443 "ctrlr_data": { 00:20:20.443 "ana_reporting": false, 00:20:20.443 "cntlid": 3, 00:20:20.443 "firmware_revision": "24.05", 00:20:20.443 "model_number": "SPDK bdev Controller", 00:20:20.443 "multi_ctrlr": true, 00:20:20.443 "oacs": { 00:20:20.443 "firmware": 0, 00:20:20.443 "format": 0, 00:20:20.443 "ns_manage": 0, 00:20:20.443 "security": 0 00:20:20.443 }, 00:20:20.443 "serial_number": "00000000000000000000", 00:20:20.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.443 "vendor_id": "0x8086" 00:20:20.443 }, 00:20:20.443 "ns_data": { 00:20:20.443 "can_share": true, 00:20:20.443 "id": 1 00:20:20.443 }, 00:20:20.443 "trid": { 00:20:20.443 "adrfam": "IPv4", 00:20:20.443 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:20.443 "traddr": "10.0.0.2", 00:20:20.443 "trsvcid": "4421", 00:20:20.443 "trtype": "TCP" 00:20:20.443 }, 00:20:20.443 "vs": { 00:20:20.443 "nvme_version": "1.3" 00:20:20.443 } 00:20:20.443 } 00:20:20.443 ] 00:20:20.443 }, 00:20:20.443 "memory_domains": [ 00:20:20.443 { 00:20:20.443 "dma_device_id": "system", 00:20:20.443 "dma_device_type": 1 00:20:20.443 } 00:20:20.443 ], 00:20:20.443 "name": "nvme0n1", 00:20:20.443 "num_blocks": 2097152, 00:20:20.443 "product_name": "NVMe disk", 00:20:20.443 "supported_io_types": { 00:20:20.443 "abort": true, 00:20:20.443 "compare": true, 00:20:20.443 "compare_and_write": true, 00:20:20.443 "flush": true, 00:20:20.443 "nvme_admin": true, 00:20:20.443 "nvme_io": true, 00:20:20.443 "read": true, 00:20:20.443 "reset": true, 00:20:20.443 "unmap": false, 00:20:20.443 "write": true, 00:20:20.443 "write_zeroes": true 00:20:20.443 }, 00:20:20.443 "uuid": "4deb8bcc-7a40-4724-b84c-6cfea48227a3", 00:20:20.443 "zoned": false 00:20:20.443 } 00:20:20.443 ] 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.QqWUpxSwd9 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:20.443 rmmod nvme_tcp 00:20:20.443 rmmod nvme_fabrics 00:20:20.443 rmmod nvme_keyring 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 87985 ']' 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 87985 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 87985 ']' 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 87985 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87985 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:20.443 killing process with pid 87985 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87985' 00:20:20.443 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 87985 00:20:20.702 [2024-05-13 18:33:36.386183] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:20.702 [2024-05-13 18:33:36.386225] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:20.702 [2024-05-13 18:33:36.386237] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:20.702 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 87985 00:20:20.702 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:20.702 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:20.702 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:20.702 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.702 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:20.702 18:33:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.702 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:20.702 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:20.962 18:33:36 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:20.962 00:20:20.962 real 0m2.567s 00:20:20.962 user 0m2.353s 00:20:20.962 sys 0m0.578s 00:20:20.962 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:20.962 18:33:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.962 ************************************ 00:20:20.962 END TEST nvmf_async_init 00:20:20.962 ************************************ 00:20:20.962 18:33:36 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:20.962 18:33:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:20.962 18:33:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:20.962 18:33:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:20.962 ************************************ 00:20:20.962 START TEST dma 00:20:20.962 ************************************ 00:20:20.962 18:33:36 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:20.962 * Looking for test storage... 00:20:20.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:20.962 18:33:36 nvmf_tcp.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:20.962 18:33:36 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:20.962 18:33:36 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:20.962 18:33:36 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:20.962 18:33:36 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.962 18:33:36 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.962 18:33:36 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.962 18:33:36 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:20:20.962 18:33:36 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:20.962 18:33:36 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:20.962 18:33:36 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:20.962 18:33:36 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:20:20.962 00:20:20.962 real 0m0.101s 00:20:20.962 user 0m0.047s 00:20:20.962 sys 0m0.057s 00:20:20.962 18:33:36 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:20.962 18:33:36 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:20:20.962 ************************************ 
00:20:20.962 END TEST dma 00:20:20.962 ************************************ 00:20:20.962 18:33:36 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:20.962 18:33:36 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:20.962 18:33:36 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:20.962 18:33:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:20.962 ************************************ 00:20:20.962 START TEST nvmf_identify 00:20:20.962 ************************************ 00:20:20.963 18:33:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:21.221 * Looking for test storage... 00:20:21.221 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.221 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.222 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:21.222 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:21.222 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:21.222 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:21.222 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:21.222 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.222 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:21.222 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:21.222 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:21.222 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:21.222 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:21.222 18:33:36 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:21.222 Cannot find device "nvmf_tgt_br" 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@155 -- # true 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:21.222 Cannot find device "nvmf_tgt_br2" 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@156 -- # true 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:21.222 Cannot find device "nvmf_tgt_br" 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@158 -- # true 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 
down 00:20:21.222 Cannot find device "nvmf_tgt_br2" 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@159 -- # true 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:21.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@162 -- # true 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:21.222 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@163 -- # true 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:21.222 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:21.480 18:33:37 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:21.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:21.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:20:21.480 00:20:21.480 --- 10.0.0.2 ping statistics --- 00:20:21.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.480 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:21.480 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:21.480 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:20:21.480 00:20:21.480 --- 10.0.0.3 ping statistics --- 00:20:21.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.480 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:21.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:21.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:21.480 00:20:21.480 --- 10.0.0.1 ping statistics --- 00:20:21.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:21.480 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@433 -- # return 0 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=88246 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 88246 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 88246 ']' 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:21.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:21.480 18:33:37 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:21.480 [2024-05-13 18:33:37.415796] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:20:21.480 [2024-05-13 18:33:37.415911] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:21.737 [2024-05-13 18:33:37.560306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:21.994 [2024-05-13 18:33:37.694994] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:21.994 [2024-05-13 18:33:37.695059] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:21.994 [2024-05-13 18:33:37.695074] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:21.994 [2024-05-13 18:33:37.695085] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:21.994 [2024-05-13 18:33:37.695095] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:21.994 [2024-05-13 18:33:37.695246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.994 [2024-05-13 18:33:37.696003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.994 [2024-05-13 18:33:37.696088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:21.994 [2024-05-13 18:33:37.696099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.561 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:22.561 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:20:22.561 18:33:38 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:22.561 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.561 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.561 [2024-05-13 18:33:38.443702] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:22.561 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.561 18:33:38 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:22.561 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.561 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.561 18:33:38 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:22.561 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.561 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.821 Malloc0 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.821 
18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.821 [2024-05-13 18:33:38.538542] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:22.821 [2024-05-13 18:33:38.538865] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:22.821 [ 00:20:22.821 { 00:20:22.821 "allow_any_host": true, 00:20:22.821 "hosts": [], 00:20:22.821 "listen_addresses": [ 00:20:22.821 { 00:20:22.821 "adrfam": "IPv4", 00:20:22.821 "traddr": "10.0.0.2", 00:20:22.821 "trsvcid": "4420", 00:20:22.821 "trtype": "TCP" 00:20:22.821 } 00:20:22.821 ], 00:20:22.821 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:22.821 "subtype": "Discovery" 00:20:22.821 }, 00:20:22.821 { 00:20:22.821 "allow_any_host": true, 00:20:22.821 "hosts": [], 00:20:22.821 "listen_addresses": [ 00:20:22.821 { 00:20:22.821 "adrfam": "IPv4", 00:20:22.821 "traddr": "10.0.0.2", 00:20:22.821 "trsvcid": "4420", 00:20:22.821 "trtype": "TCP" 00:20:22.821 } 00:20:22.821 ], 00:20:22.821 "max_cntlid": 65519, 00:20:22.821 "max_namespaces": 32, 00:20:22.821 "min_cntlid": 1, 00:20:22.821 "model_number": "SPDK bdev Controller", 00:20:22.821 "namespaces": [ 00:20:22.821 { 00:20:22.821 "bdev_name": "Malloc0", 00:20:22.821 "eui64": "ABCDEF0123456789", 00:20:22.821 "name": "Malloc0", 00:20:22.821 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:22.821 "nsid": 1, 00:20:22.821 "uuid": "6697e044-1ea4-4655-baa8-3eea7c3c7e34" 00:20:22.821 } 00:20:22.821 ], 00:20:22.821 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.821 "serial_number": "SPDK00000000000001", 00:20:22.821 "subtype": "NVMe" 00:20:22.821 } 00:20:22.821 ] 00:20:22.821 18:33:38 
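
The provisioning above is driven through rpc_cmd, the harness helper that forwards these calls to SPDK's scripts/rpc.py over the target's /var/tmp/spdk.sock socket. A rough stand-alone equivalent of the same sequence is sketched below, with every method name and argument copied from the trace; the flag spellings are whatever this SPDK revision's rpc.py accepts, and the commands assume they are run from the SPDK repository root.

  # transport, backing bdev, and an NVMe-oF subsystem with one namespace
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789

  # expose the subsystem and the discovery service on the target-side address
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # dump the resulting configuration (this is the JSON shown above)
  scripts/rpc.py nvmf_get_subsystems

With the listeners in place, the test then points spdk_nvme_identify at the discovery subsystem (trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420), which is what produces the controller-initialization trace and the discovery-log report that follow.
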
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.821 18:33:38 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:22.821 [2024-05-13 18:33:38.588829] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:20:22.821 [2024-05-13 18:33:38.589056] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88305 ] 00:20:22.821 [2024-05-13 18:33:38.725867] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:22.821 [2024-05-13 18:33:38.725947] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:22.821 [2024-05-13 18:33:38.725955] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:22.821 [2024-05-13 18:33:38.725972] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:22.821 [2024-05-13 18:33:38.725982] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:22.821 [2024-05-13 18:33:38.726144] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:22.821 [2024-05-13 18:33:38.726198] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x209e280 0 00:20:22.821 [2024-05-13 18:33:38.738592] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:22.821 [2024-05-13 18:33:38.738619] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:22.821 [2024-05-13 18:33:38.738625] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:22.821 [2024-05-13 18:33:38.738629] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:22.821 [2024-05-13 18:33:38.738680] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.821 [2024-05-13 18:33:38.738688] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.821 [2024-05-13 18:33:38.738693] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x209e280) 00:20:22.821 [2024-05-13 18:33:38.738710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:22.821 [2024-05-13 18:33:38.738746] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6950, cid 0, qid 0 00:20:22.821 [2024-05-13 18:33:38.746595] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.822 [2024-05-13 18:33:38.746619] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.822 [2024-05-13 18:33:38.746625] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.746630] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6950) on tqpair=0x209e280 00:20:22.822 [2024-05-13 18:33:38.746646] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:22.822 [2024-05-13 18:33:38.746656] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no 
timeout) 00:20:22.822 [2024-05-13 18:33:38.746662] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:22.822 [2024-05-13 18:33:38.746679] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.746685] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.746690] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x209e280) 00:20:22.822 [2024-05-13 18:33:38.746701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.822 [2024-05-13 18:33:38.746732] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6950, cid 0, qid 0 00:20:22.822 [2024-05-13 18:33:38.746820] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.822 [2024-05-13 18:33:38.746827] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.822 [2024-05-13 18:33:38.746831] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.746835] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6950) on tqpair=0x209e280 00:20:22.822 [2024-05-13 18:33:38.746844] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:22.822 [2024-05-13 18:33:38.746856] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:22.822 [2024-05-13 18:33:38.746868] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.746876] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.746882] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x209e280) 00:20:22.822 [2024-05-13 18:33:38.746894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.822 [2024-05-13 18:33:38.746927] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6950, cid 0, qid 0 00:20:22.822 [2024-05-13 18:33:38.746989] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.822 [2024-05-13 18:33:38.747005] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.822 [2024-05-13 18:33:38.747010] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.747014] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6950) on tqpair=0x209e280 00:20:22.822 [2024-05-13 18:33:38.747022] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:22.822 [2024-05-13 18:33:38.747032] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:22.822 [2024-05-13 18:33:38.747041] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.747048] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.747054] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x209e280) 00:20:22.822 [2024-05-13 18:33:38.747066] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.822 [2024-05-13 18:33:38.747100] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6950, cid 0, qid 0 00:20:22.822 [2024-05-13 18:33:38.747153] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.822 [2024-05-13 18:33:38.747162] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.822 [2024-05-13 18:33:38.747166] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.747172] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6950) on tqpair=0x209e280 00:20:22.822 [2024-05-13 18:33:38.747183] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:22.822 [2024-05-13 18:33:38.747201] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.747210] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.747216] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x209e280) 00:20:22.822 [2024-05-13 18:33:38.747226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.822 [2024-05-13 18:33:38.747248] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6950, cid 0, qid 0 00:20:22.822 [2024-05-13 18:33:38.747309] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.822 [2024-05-13 18:33:38.747334] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.822 [2024-05-13 18:33:38.747341] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.747346] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6950) on tqpair=0x209e280 00:20:22.822 [2024-05-13 18:33:38.747352] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:22.822 [2024-05-13 18:33:38.747358] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:22.822 [2024-05-13 18:33:38.747368] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:22.822 [2024-05-13 18:33:38.747474] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:22.822 [2024-05-13 18:33:38.747482] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:22.822 [2024-05-13 18:33:38.747493] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.747497] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.747501] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x209e280) 00:20:22.822 [2024-05-13 18:33:38.747511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.822 [2024-05-13 18:33:38.747544] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6950, cid 0, qid 0 00:20:22.822 [2024-05-13 18:33:38.747614] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.822 [2024-05-13 18:33:38.747629] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.822 [2024-05-13 18:33:38.747636] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.747644] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6950) on tqpair=0x209e280 00:20:22.822 [2024-05-13 18:33:38.747653] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:22.822 [2024-05-13 18:33:38.747665] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.747670] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.747674] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x209e280) 00:20:22.822 [2024-05-13 18:33:38.747683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.822 [2024-05-13 18:33:38.747715] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6950, cid 0, qid 0 00:20:22.822 [2024-05-13 18:33:38.747776] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.822 [2024-05-13 18:33:38.747788] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.822 [2024-05-13 18:33:38.747795] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.747799] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6950) on tqpair=0x209e280 00:20:22.822 [2024-05-13 18:33:38.747805] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:22.822 [2024-05-13 18:33:38.747811] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:22.822 [2024-05-13 18:33:38.747821] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:22.822 [2024-05-13 18:33:38.747855] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:22.822 [2024-05-13 18:33:38.747869] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.747874] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x209e280) 00:20:22.822 [2024-05-13 18:33:38.747883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.822 [2024-05-13 18:33:38.747917] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6950, cid 0, qid 0 00:20:22.822 [2024-05-13 18:33:38.748010] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.822 [2024-05-13 18:33:38.748023] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.822 [2024-05-13 18:33:38.748030] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.822 [2024-05-13 
18:33:38.748037] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x209e280): datao=0, datal=4096, cccid=0 00:20:22.822 [2024-05-13 18:33:38.748046] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20e6950) on tqpair(0x209e280): expected_datao=0, payload_size=4096 00:20:22.822 [2024-05-13 18:33:38.748055] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.748068] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.748074] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.748084] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.822 [2024-05-13 18:33:38.748090] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.822 [2024-05-13 18:33:38.748096] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.748103] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6950) on tqpair=0x209e280 00:20:22.822 [2024-05-13 18:33:38.748114] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:22.822 [2024-05-13 18:33:38.748121] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:22.822 [2024-05-13 18:33:38.748128] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:22.822 [2024-05-13 18:33:38.748137] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:22.822 [2024-05-13 18:33:38.748146] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:22.822 [2024-05-13 18:33:38.748155] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:22.822 [2024-05-13 18:33:38.748165] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:22.822 [2024-05-13 18:33:38.748181] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.748189] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.822 [2024-05-13 18:33:38.748193] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x209e280) 00:20:22.823 [2024-05-13 18:33:38.748202] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:22.823 [2024-05-13 18:33:38.748232] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6950, cid 0, qid 0 00:20:22.823 [2024-05-13 18:33:38.748297] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.823 [2024-05-13 18:33:38.748306] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.823 [2024-05-13 18:33:38.748310] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748314] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6950) on tqpair=0x209e280 00:20:22.823 [2024-05-13 18:33:38.748324] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748328] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748332] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x209e280) 00:20:22.823 [2024-05-13 18:33:38.748339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.823 [2024-05-13 18:33:38.748350] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748357] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748364] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x209e280) 00:20:22.823 [2024-05-13 18:33:38.748374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.823 [2024-05-13 18:33:38.748385] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748392] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748396] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x209e280) 00:20:22.823 [2024-05-13 18:33:38.748402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.823 [2024-05-13 18:33:38.748409] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748413] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748417] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:22.823 [2024-05-13 18:33:38.748423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.823 [2024-05-13 18:33:38.748431] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:22.823 [2024-05-13 18:33:38.748449] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:22.823 [2024-05-13 18:33:38.748463] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748470] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x209e280) 00:20:22.823 [2024-05-13 18:33:38.748481] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.823 [2024-05-13 18:33:38.748512] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6950, cid 0, qid 0 00:20:22.823 [2024-05-13 18:33:38.748522] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6ab0, cid 1, qid 0 00:20:22.823 [2024-05-13 18:33:38.748530] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6c10, cid 2, qid 0 00:20:22.823 [2024-05-13 18:33:38.748538] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:22.823 [2024-05-13 18:33:38.748546] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6ed0, cid 4, qid 0 00:20:22.823 [2024-05-13 18:33:38.748653] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.823 
[2024-05-13 18:33:38.748668] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.823 [2024-05-13 18:33:38.748675] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748682] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6ed0) on tqpair=0x209e280 00:20:22.823 [2024-05-13 18:33:38.748692] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:22.823 [2024-05-13 18:33:38.748699] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:22.823 [2024-05-13 18:33:38.748712] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748717] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x209e280) 00:20:22.823 [2024-05-13 18:33:38.748728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.823 [2024-05-13 18:33:38.748769] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6ed0, cid 4, qid 0 00:20:22.823 [2024-05-13 18:33:38.748838] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.823 [2024-05-13 18:33:38.748848] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.823 [2024-05-13 18:33:38.748853] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748858] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x209e280): datao=0, datal=4096, cccid=4 00:20:22.823 [2024-05-13 18:33:38.748865] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20e6ed0) on tqpair(0x209e280): expected_datao=0, payload_size=4096 00:20:22.823 [2024-05-13 18:33:38.748872] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748881] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748888] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748902] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.823 [2024-05-13 18:33:38.748914] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.823 [2024-05-13 18:33:38.748920] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748927] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6ed0) on tqpair=0x209e280 00:20:22.823 [2024-05-13 18:33:38.748948] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:22.823 [2024-05-13 18:33:38.748985] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.748995] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x209e280) 00:20:22.823 [2024-05-13 18:33:38.749006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.823 [2024-05-13 18:33:38.749015] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.749019] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:22.823 
[2024-05-13 18:33:38.749023] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x209e280) 00:20:22.823 [2024-05-13 18:33:38.749031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.823 [2024-05-13 18:33:38.749073] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6ed0, cid 4, qid 0 00:20:22.823 [2024-05-13 18:33:38.749084] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e7030, cid 5, qid 0 00:20:22.823 [2024-05-13 18:33:38.749187] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:22.823 [2024-05-13 18:33:38.749200] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:22.823 [2024-05-13 18:33:38.749206] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.749213] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x209e280): datao=0, datal=1024, cccid=4 00:20:22.823 [2024-05-13 18:33:38.749220] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20e6ed0) on tqpair(0x209e280): expected_datao=0, payload_size=1024 00:20:22.823 [2024-05-13 18:33:38.749225] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.749233] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.749237] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.749243] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:22.823 [2024-05-13 18:33:38.749252] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:22.823 [2024-05-13 18:33:38.749259] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:22.823 [2024-05-13 18:33:38.749265] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e7030) on tqpair=0x209e280 00:20:23.084 [2024-05-13 18:33:38.789668] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.084 [2024-05-13 18:33:38.789721] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.084 [2024-05-13 18:33:38.789727] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.084 [2024-05-13 18:33:38.789733] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6ed0) on tqpair=0x209e280 00:20:23.084 [2024-05-13 18:33:38.789775] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.084 [2024-05-13 18:33:38.789782] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x209e280) 00:20:23.084 [2024-05-13 18:33:38.789797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.084 [2024-05-13 18:33:38.789840] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6ed0, cid 4, qid 0 00:20:23.084 [2024-05-13 18:33:38.789948] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.084 [2024-05-13 18:33:38.789955] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.084 [2024-05-13 18:33:38.789959] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.084 [2024-05-13 18:33:38.789963] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x209e280): datao=0, datal=3072, cccid=4 00:20:23.084 [2024-05-13 
18:33:38.789969] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20e6ed0) on tqpair(0x209e280): expected_datao=0, payload_size=3072 00:20:23.084 [2024-05-13 18:33:38.789974] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.084 [2024-05-13 18:33:38.789985] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.084 [2024-05-13 18:33:38.789992] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.084 [2024-05-13 18:33:38.790005] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.084 [2024-05-13 18:33:38.790015] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.084 [2024-05-13 18:33:38.790021] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.084 [2024-05-13 18:33:38.790028] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6ed0) on tqpair=0x209e280 00:20:23.084 [2024-05-13 18:33:38.790046] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.084 [2024-05-13 18:33:38.790055] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x209e280) 00:20:23.084 [2024-05-13 18:33:38.790068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.084 [2024-05-13 18:33:38.790105] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6ed0, cid 4, qid 0 00:20:23.084 [2024-05-13 18:33:38.790182] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.084 [2024-05-13 18:33:38.790196] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.084 [2024-05-13 18:33:38.790203] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.084 [2024-05-13 18:33:38.790210] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x209e280): datao=0, datal=8, cccid=4 00:20:23.084 [2024-05-13 18:33:38.790218] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x20e6ed0) on tqpair(0x209e280): expected_datao=0, payload_size=8 00:20:23.084 [2024-05-13 18:33:38.790226] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.084 [2024-05-13 18:33:38.790234] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.084 [2024-05-13 18:33:38.790238] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.084 [2024-05-13 18:33:38.834622] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.084 [2024-05-13 18:33:38.834665] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.084 [2024-05-13 18:33:38.834671] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.084 [2024-05-13 18:33:38.834677] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6ed0) on tqpair=0x209e280 00:20:23.084 ===================================================== 00:20:23.084 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:23.084 ===================================================== 00:20:23.084 Controller Capabilities/Features 00:20:23.084 ================================ 00:20:23.084 Vendor ID: 0000 00:20:23.084 Subsystem Vendor ID: 0000 00:20:23.084 Serial Number: .................... 00:20:23.084 Model Number: ........................................ 
00:20:23.084 Firmware Version: 24.05 00:20:23.084 Recommended Arb Burst: 0 00:20:23.084 IEEE OUI Identifier: 00 00 00 00:20:23.084 Multi-path I/O 00:20:23.084 May have multiple subsystem ports: No 00:20:23.084 May have multiple controllers: No 00:20:23.084 Associated with SR-IOV VF: No 00:20:23.084 Max Data Transfer Size: 131072 00:20:23.084 Max Number of Namespaces: 0 00:20:23.084 Max Number of I/O Queues: 1024 00:20:23.084 NVMe Specification Version (VS): 1.3 00:20:23.084 NVMe Specification Version (Identify): 1.3 00:20:23.084 Maximum Queue Entries: 128 00:20:23.084 Contiguous Queues Required: Yes 00:20:23.084 Arbitration Mechanisms Supported 00:20:23.084 Weighted Round Robin: Not Supported 00:20:23.084 Vendor Specific: Not Supported 00:20:23.084 Reset Timeout: 15000 ms 00:20:23.084 Doorbell Stride: 4 bytes 00:20:23.084 NVM Subsystem Reset: Not Supported 00:20:23.084 Command Sets Supported 00:20:23.084 NVM Command Set: Supported 00:20:23.084 Boot Partition: Not Supported 00:20:23.084 Memory Page Size Minimum: 4096 bytes 00:20:23.084 Memory Page Size Maximum: 4096 bytes 00:20:23.084 Persistent Memory Region: Not Supported 00:20:23.084 Optional Asynchronous Events Supported 00:20:23.084 Namespace Attribute Notices: Not Supported 00:20:23.084 Firmware Activation Notices: Not Supported 00:20:23.084 ANA Change Notices: Not Supported 00:20:23.084 PLE Aggregate Log Change Notices: Not Supported 00:20:23.084 LBA Status Info Alert Notices: Not Supported 00:20:23.084 EGE Aggregate Log Change Notices: Not Supported 00:20:23.084 Normal NVM Subsystem Shutdown event: Not Supported 00:20:23.084 Zone Descriptor Change Notices: Not Supported 00:20:23.084 Discovery Log Change Notices: Supported 00:20:23.084 Controller Attributes 00:20:23.084 128-bit Host Identifier: Not Supported 00:20:23.084 Non-Operational Permissive Mode: Not Supported 00:20:23.084 NVM Sets: Not Supported 00:20:23.084 Read Recovery Levels: Not Supported 00:20:23.084 Endurance Groups: Not Supported 00:20:23.084 Predictable Latency Mode: Not Supported 00:20:23.084 Traffic Based Keep ALive: Not Supported 00:20:23.084 Namespace Granularity: Not Supported 00:20:23.084 SQ Associations: Not Supported 00:20:23.084 UUID List: Not Supported 00:20:23.084 Multi-Domain Subsystem: Not Supported 00:20:23.084 Fixed Capacity Management: Not Supported 00:20:23.084 Variable Capacity Management: Not Supported 00:20:23.084 Delete Endurance Group: Not Supported 00:20:23.084 Delete NVM Set: Not Supported 00:20:23.084 Extended LBA Formats Supported: Not Supported 00:20:23.084 Flexible Data Placement Supported: Not Supported 00:20:23.084 00:20:23.084 Controller Memory Buffer Support 00:20:23.084 ================================ 00:20:23.084 Supported: No 00:20:23.084 00:20:23.084 Persistent Memory Region Support 00:20:23.084 ================================ 00:20:23.084 Supported: No 00:20:23.084 00:20:23.084 Admin Command Set Attributes 00:20:23.084 ============================ 00:20:23.084 Security Send/Receive: Not Supported 00:20:23.084 Format NVM: Not Supported 00:20:23.084 Firmware Activate/Download: Not Supported 00:20:23.084 Namespace Management: Not Supported 00:20:23.084 Device Self-Test: Not Supported 00:20:23.084 Directives: Not Supported 00:20:23.084 NVMe-MI: Not Supported 00:20:23.084 Virtualization Management: Not Supported 00:20:23.084 Doorbell Buffer Config: Not Supported 00:20:23.084 Get LBA Status Capability: Not Supported 00:20:23.084 Command & Feature Lockdown Capability: Not Supported 00:20:23.084 Abort Command Limit: 1 00:20:23.084 Async 
Event Request Limit: 4 00:20:23.084 Number of Firmware Slots: N/A 00:20:23.084 Firmware Slot 1 Read-Only: N/A 00:20:23.084 Firmware Activation Without Reset: N/A 00:20:23.084 Multiple Update Detection Support: N/A 00:20:23.084 Firmware Update Granularity: No Information Provided 00:20:23.084 Per-Namespace SMART Log: No 00:20:23.084 Asymmetric Namespace Access Log Page: Not Supported 00:20:23.084 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:23.084 Command Effects Log Page: Not Supported 00:20:23.084 Get Log Page Extended Data: Supported 00:20:23.084 Telemetry Log Pages: Not Supported 00:20:23.084 Persistent Event Log Pages: Not Supported 00:20:23.084 Supported Log Pages Log Page: May Support 00:20:23.084 Commands Supported & Effects Log Page: Not Supported 00:20:23.084 Feature Identifiers & Effects Log Page:May Support 00:20:23.085 NVMe-MI Commands & Effects Log Page: May Support 00:20:23.085 Data Area 4 for Telemetry Log: Not Supported 00:20:23.085 Error Log Page Entries Supported: 128 00:20:23.085 Keep Alive: Not Supported 00:20:23.085 00:20:23.085 NVM Command Set Attributes 00:20:23.085 ========================== 00:20:23.085 Submission Queue Entry Size 00:20:23.085 Max: 1 00:20:23.085 Min: 1 00:20:23.085 Completion Queue Entry Size 00:20:23.085 Max: 1 00:20:23.085 Min: 1 00:20:23.085 Number of Namespaces: 0 00:20:23.085 Compare Command: Not Supported 00:20:23.085 Write Uncorrectable Command: Not Supported 00:20:23.085 Dataset Management Command: Not Supported 00:20:23.085 Write Zeroes Command: Not Supported 00:20:23.085 Set Features Save Field: Not Supported 00:20:23.085 Reservations: Not Supported 00:20:23.085 Timestamp: Not Supported 00:20:23.085 Copy: Not Supported 00:20:23.085 Volatile Write Cache: Not Present 00:20:23.085 Atomic Write Unit (Normal): 1 00:20:23.085 Atomic Write Unit (PFail): 1 00:20:23.085 Atomic Compare & Write Unit: 1 00:20:23.085 Fused Compare & Write: Supported 00:20:23.085 Scatter-Gather List 00:20:23.085 SGL Command Set: Supported 00:20:23.085 SGL Keyed: Supported 00:20:23.085 SGL Bit Bucket Descriptor: Not Supported 00:20:23.085 SGL Metadata Pointer: Not Supported 00:20:23.085 Oversized SGL: Not Supported 00:20:23.085 SGL Metadata Address: Not Supported 00:20:23.085 SGL Offset: Supported 00:20:23.085 Transport SGL Data Block: Not Supported 00:20:23.085 Replay Protected Memory Block: Not Supported 00:20:23.085 00:20:23.085 Firmware Slot Information 00:20:23.085 ========================= 00:20:23.085 Active slot: 0 00:20:23.085 00:20:23.085 00:20:23.085 Error Log 00:20:23.085 ========= 00:20:23.085 00:20:23.085 Active Namespaces 00:20:23.085 ================= 00:20:23.085 Discovery Log Page 00:20:23.085 ================== 00:20:23.085 Generation Counter: 2 00:20:23.085 Number of Records: 2 00:20:23.085 Record Format: 0 00:20:23.085 00:20:23.085 Discovery Log Entry 0 00:20:23.085 ---------------------- 00:20:23.085 Transport Type: 3 (TCP) 00:20:23.085 Address Family: 1 (IPv4) 00:20:23.085 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:23.085 Entry Flags: 00:20:23.085 Duplicate Returned Information: 1 00:20:23.085 Explicit Persistent Connection Support for Discovery: 1 00:20:23.085 Transport Requirements: 00:20:23.085 Secure Channel: Not Required 00:20:23.085 Port ID: 0 (0x0000) 00:20:23.085 Controller ID: 65535 (0xffff) 00:20:23.085 Admin Max SQ Size: 128 00:20:23.085 Transport Service Identifier: 4420 00:20:23.085 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:23.085 Transport Address: 10.0.0.2 00:20:23.085 
Discovery Log Entry 1 00:20:23.085 ---------------------- 00:20:23.085 Transport Type: 3 (TCP) 00:20:23.085 Address Family: 1 (IPv4) 00:20:23.085 Subsystem Type: 2 (NVM Subsystem) 00:20:23.085 Entry Flags: 00:20:23.085 Duplicate Returned Information: 0 00:20:23.085 Explicit Persistent Connection Support for Discovery: 0 00:20:23.085 Transport Requirements: 00:20:23.085 Secure Channel: Not Required 00:20:23.085 Port ID: 0 (0x0000) 00:20:23.085 Controller ID: 65535 (0xffff) 00:20:23.085 Admin Max SQ Size: 128 00:20:23.085 Transport Service Identifier: 4420 00:20:23.085 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:23.085 Transport Address: 10.0.0.2 [2024-05-13 18:33:38.834811] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:23.085 [2024-05-13 18:33:38.834831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.085 [2024-05-13 18:33:38.834839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.085 [2024-05-13 18:33:38.834846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.085 [2024-05-13 18:33:38.834853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.085 [2024-05-13 18:33:38.834870] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.834878] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.834885] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.085 [2024-05-13 18:33:38.834903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.085 [2024-05-13 18:33:38.834946] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.085 [2024-05-13 18:33:38.835023] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.085 [2024-05-13 18:33:38.835037] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.085 [2024-05-13 18:33:38.835043] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835048] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.085 [2024-05-13 18:33:38.835059] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835063] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835067] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.085 [2024-05-13 18:33:38.835077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.085 [2024-05-13 18:33:38.835110] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.085 [2024-05-13 18:33:38.835191] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.085 [2024-05-13 18:33:38.835212] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.085 [2024-05-13 18:33:38.835219] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835224] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.085 [2024-05-13 18:33:38.835233] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:23.085 [2024-05-13 18:33:38.835241] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:23.085 [2024-05-13 18:33:38.835259] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835267] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835272] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.085 [2024-05-13 18:33:38.835281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.085 [2024-05-13 18:33:38.835312] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.085 [2024-05-13 18:33:38.835369] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.085 [2024-05-13 18:33:38.835378] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.085 [2024-05-13 18:33:38.835382] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835386] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.085 [2024-05-13 18:33:38.835404] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835414] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835421] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.085 [2024-05-13 18:33:38.835433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.085 [2024-05-13 18:33:38.835462] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.085 [2024-05-13 18:33:38.835522] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.085 [2024-05-13 18:33:38.835543] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.085 [2024-05-13 18:33:38.835549] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835555] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.085 [2024-05-13 18:33:38.835591] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835605] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835609] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.085 [2024-05-13 18:33:38.835618] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.085 [2024-05-13 18:33:38.835642] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.085 [2024-05-13 18:33:38.835703] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.085 [2024-05-13 
18:33:38.835723] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.085 [2024-05-13 18:33:38.835731] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835737] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.085 [2024-05-13 18:33:38.835751] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835756] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835760] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.085 [2024-05-13 18:33:38.835771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.085 [2024-05-13 18:33:38.835801] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.085 [2024-05-13 18:33:38.835862] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.085 [2024-05-13 18:33:38.835881] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.085 [2024-05-13 18:33:38.835898] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835905] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.085 [2024-05-13 18:33:38.835921] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.085 [2024-05-13 18:33:38.835927] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.835931] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.086 [2024-05-13 18:33:38.835939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.086 [2024-05-13 18:33:38.835967] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.086 [2024-05-13 18:33:38.836023] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.086 [2024-05-13 18:33:38.836033] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.086 [2024-05-13 18:33:38.836037] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836044] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.086 [2024-05-13 18:33:38.836063] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836070] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836074] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.086 [2024-05-13 18:33:38.836082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.086 [2024-05-13 18:33:38.836110] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.086 [2024-05-13 18:33:38.836162] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.086 [2024-05-13 18:33:38.836181] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.086 [2024-05-13 18:33:38.836188] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:20:23.086 [2024-05-13 18:33:38.836195] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.086 [2024-05-13 18:33:38.836209] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836214] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836218] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.086 [2024-05-13 18:33:38.836227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.086 [2024-05-13 18:33:38.836258] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.086 [2024-05-13 18:33:38.836310] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.086 [2024-05-13 18:33:38.836322] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.086 [2024-05-13 18:33:38.836329] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836335] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.086 [2024-05-13 18:33:38.836350] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836355] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836359] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.086 [2024-05-13 18:33:38.836367] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.086 [2024-05-13 18:33:38.836392] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.086 [2024-05-13 18:33:38.836449] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.086 [2024-05-13 18:33:38.836468] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.086 [2024-05-13 18:33:38.836473] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836478] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.086 [2024-05-13 18:33:38.836496] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836505] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836512] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.086 [2024-05-13 18:33:38.836524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.086 [2024-05-13 18:33:38.836553] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.086 [2024-05-13 18:33:38.836622] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.086 [2024-05-13 18:33:38.836636] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.086 [2024-05-13 18:33:38.836641] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836645] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.086 [2024-05-13 18:33:38.836660] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836668] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836674] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.086 [2024-05-13 18:33:38.836686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.086 [2024-05-13 18:33:38.836717] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.086 [2024-05-13 18:33:38.836784] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.086 [2024-05-13 18:33:38.836799] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.086 [2024-05-13 18:33:38.836803] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836808] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.086 [2024-05-13 18:33:38.836820] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836826] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836829] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.086 [2024-05-13 18:33:38.836840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.086 [2024-05-13 18:33:38.836870] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.086 [2024-05-13 18:33:38.836922] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.086 [2024-05-13 18:33:38.836936] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.086 [2024-05-13 18:33:38.836940] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836944] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.086 [2024-05-13 18:33:38.836960] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836967] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.836971] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.086 [2024-05-13 18:33:38.836983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.086 [2024-05-13 18:33:38.837026] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.086 [2024-05-13 18:33:38.837080] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.086 [2024-05-13 18:33:38.837095] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.086 [2024-05-13 18:33:38.837100] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.837104] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.086 [2024-05-13 18:33:38.837117] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.837123] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.086 [2024-05-13 
18:33:38.837129] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.086 [2024-05-13 18:33:38.837141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.086 [2024-05-13 18:33:38.837173] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.086 [2024-05-13 18:33:38.837229] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.086 [2024-05-13 18:33:38.837240] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.086 [2024-05-13 18:33:38.837246] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.837251] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.086 [2024-05-13 18:33:38.837263] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.837268] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.837272] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.086 [2024-05-13 18:33:38.837282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.086 [2024-05-13 18:33:38.837308] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.086 [2024-05-13 18:33:38.837361] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.086 [2024-05-13 18:33:38.837375] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.086 [2024-05-13 18:33:38.837379] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.837384] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.086 [2024-05-13 18:33:38.837399] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.837406] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.837410] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.086 [2024-05-13 18:33:38.837422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.086 [2024-05-13 18:33:38.837454] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.086 [2024-05-13 18:33:38.837510] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.086 [2024-05-13 18:33:38.837522] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.086 [2024-05-13 18:33:38.837528] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.837533] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.086 [2024-05-13 18:33:38.837545] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.837550] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.837555] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.086 [2024-05-13 18:33:38.837566] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.086 [2024-05-13 18:33:38.837624] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.086 [2024-05-13 18:33:38.837682] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.086 [2024-05-13 18:33:38.837697] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.086 [2024-05-13 18:33:38.837701] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.837706] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.086 [2024-05-13 18:33:38.837724] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.837734] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.086 [2024-05-13 18:33:38.837740] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.086 [2024-05-13 18:33:38.837752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.087 [2024-05-13 18:33:38.837783] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.087 [2024-05-13 18:33:38.837836] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.087 [2024-05-13 18:33:38.837846] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.087 [2024-05-13 18:33:38.837850] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.837854] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.087 [2024-05-13 18:33:38.837868] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.837875] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.837880] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.087 [2024-05-13 18:33:38.837890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.087 [2024-05-13 18:33:38.837921] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.087 [2024-05-13 18:33:38.837973] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.087 [2024-05-13 18:33:38.837987] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.087 [2024-05-13 18:33:38.837992] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.837996] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.087 [2024-05-13 18:33:38.838009] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.838014] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.838018] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.087 [2024-05-13 18:33:38.838029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.087 [2024-05-13 18:33:38.838055] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.087 [2024-05-13 18:33:38.838110] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.087 [2024-05-13 18:33:38.838124] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.087 [2024-05-13 18:33:38.838128] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.838132] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.087 [2024-05-13 18:33:38.838147] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.838155] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.838159] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.087 [2024-05-13 18:33:38.838171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.087 [2024-05-13 18:33:38.838202] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.087 [2024-05-13 18:33:38.838264] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.087 [2024-05-13 18:33:38.838283] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.087 [2024-05-13 18:33:38.838289] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.838293] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.087 [2024-05-13 18:33:38.838307] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.838312] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.838316] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.087 [2024-05-13 18:33:38.838327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.087 [2024-05-13 18:33:38.838356] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.087 [2024-05-13 18:33:38.838414] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.087 [2024-05-13 18:33:38.838432] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.087 [2024-05-13 18:33:38.838437] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.838441] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.087 [2024-05-13 18:33:38.838455] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.838461] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.838467] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.087 [2024-05-13 18:33:38.838477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.087 [2024-05-13 18:33:38.838505] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.087 [2024-05-13 18:33:38.838557] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:20:23.087 [2024-05-13 18:33:38.838566] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.087 [2024-05-13 18:33:38.842591] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.842601] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.087 [2024-05-13 18:33:38.842621] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.842626] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.842631] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x209e280) 00:20:23.087 [2024-05-13 18:33:38.842640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.087 [2024-05-13 18:33:38.842669] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20e6d70, cid 3, qid 0 00:20:23.087 [2024-05-13 18:33:38.842742] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.087 [2024-05-13 18:33:38.842749] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.087 [2024-05-13 18:33:38.842753] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.087 [2024-05-13 18:33:38.842757] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x20e6d70) on tqpair=0x209e280 00:20:23.087 [2024-05-13 18:33:38.842766] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:23.087 00:20:23.087 18:33:38 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:23.087 [2024-05-13 18:33:38.875240] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
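The spdk_nvme_identify run started above (host/identify.sh@45) connects to the TCP target at 10.0.0.2:4420 and drives the full controller initialization that the DEBUG trace below records (connect adminq, read vs/cap, enable CC.EN, IDENTIFY, SET FEATURES, keep alive). Purely as an illustrative sketch and not part of this test run, a standalone program could do the same connect-and-identify through the public SPDK host API; the file name, app name, and minimal error handling below are assumptions:

/* identify_sketch.c -- hypothetical, minimal reproduction of what the
 * spdk_nvme_identify invocation above does over TCP. Assumes an SPDK build
 * to compile/link against; names and error handling are not from this log. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {};
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";   /* hypothetical app name */
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same transport ID string that host/identify.sh passes via -r */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        fprintf(stderr, "failed to parse transport ID\n");
        return 1;
    }

    /* Synchronous connect: runs the init state machine traced in the DEBUG
     * log (connect adminq, read vs/cap, CC.EN = 1, IDENTIFY, SET FEATURES). */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        fprintf(stderr, "connect to %s failed\n", trid.traddr);
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Model: %.40s Serial: %.20s CNTLID: 0x%04x\n",
           (const char *)cdata->mn, (const char *)cdata->sn, cdata->cntlid);

    spdk_nvme_detach(ctrlr);   /* triggers the CSTS shutdown polling seen earlier */
    return 0;
}

Built against the same SPDK tree, such a sketch would report the values the identify output prints further below (Model Number: SPDK bdev Controller, Serial Number: SPDK00000000000001).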
00:20:23.087 [2024-05-13 18:33:38.875281] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88307 ] 00:20:23.087 [2024-05-13 18:33:39.012788] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:23.087 [2024-05-13 18:33:39.012859] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:23.087 [2024-05-13 18:33:39.012868] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:23.087 [2024-05-13 18:33:39.012885] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:23.087 [2024-05-13 18:33:39.012896] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:23.087 [2024-05-13 18:33:39.013045] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:23.087 [2024-05-13 18:33:39.013106] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9ba280 0 00:20:23.350 [2024-05-13 18:33:39.025596] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:23.350 [2024-05-13 18:33:39.025623] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:23.350 [2024-05-13 18:33:39.025630] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:23.350 [2024-05-13 18:33:39.025634] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:23.350 [2024-05-13 18:33:39.025684] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.025692] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.025696] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9ba280) 00:20:23.350 [2024-05-13 18:33:39.025714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:23.350 [2024-05-13 18:33:39.025747] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02950, cid 0, qid 0 00:20:23.350 [2024-05-13 18:33:39.033594] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.350 [2024-05-13 18:33:39.033619] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.350 [2024-05-13 18:33:39.033625] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.033630] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02950) on tqpair=0x9ba280 00:20:23.350 [2024-05-13 18:33:39.033641] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:23.350 [2024-05-13 18:33:39.033652] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:23.350 [2024-05-13 18:33:39.033659] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:23.350 [2024-05-13 18:33:39.033676] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.033682] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.033686] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9ba280) 00:20:23.350 [2024-05-13 18:33:39.033698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.350 [2024-05-13 18:33:39.033728] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02950, cid 0, qid 0 00:20:23.350 [2024-05-13 18:33:39.033817] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.350 [2024-05-13 18:33:39.033824] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.350 [2024-05-13 18:33:39.033828] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.033832] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02950) on tqpair=0x9ba280 00:20:23.350 [2024-05-13 18:33:39.033838] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:23.350 [2024-05-13 18:33:39.033846] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:23.350 [2024-05-13 18:33:39.033855] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.033859] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.033863] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9ba280) 00:20:23.350 [2024-05-13 18:33:39.033875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.350 [2024-05-13 18:33:39.033904] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02950, cid 0, qid 0 00:20:23.350 [2024-05-13 18:33:39.034307] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.350 [2024-05-13 18:33:39.034319] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.350 [2024-05-13 18:33:39.034323] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.034327] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02950) on tqpair=0x9ba280 00:20:23.350 [2024-05-13 18:33:39.034341] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:23.350 [2024-05-13 18:33:39.034356] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:23.350 [2024-05-13 18:33:39.034366] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.034371] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.034375] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9ba280) 00:20:23.350 [2024-05-13 18:33:39.034383] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.350 [2024-05-13 18:33:39.034408] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02950, cid 0, qid 0 00:20:23.350 [2024-05-13 18:33:39.034472] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.350 [2024-05-13 18:33:39.034479] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.350 [2024-05-13 18:33:39.034483] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.034488] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02950) on tqpair=0x9ba280 00:20:23.350 [2024-05-13 18:33:39.034494] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:23.350 [2024-05-13 18:33:39.034505] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.034509] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.034513] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9ba280) 00:20:23.350 [2024-05-13 18:33:39.034521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.350 [2024-05-13 18:33:39.034540] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02950, cid 0, qid 0 00:20:23.350 [2024-05-13 18:33:39.034609] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.350 [2024-05-13 18:33:39.034618] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.350 [2024-05-13 18:33:39.034622] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.034627] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02950) on tqpair=0x9ba280 00:20:23.350 [2024-05-13 18:33:39.034632] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:23.350 [2024-05-13 18:33:39.034638] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:23.350 [2024-05-13 18:33:39.034647] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:23.350 [2024-05-13 18:33:39.034753] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:23.350 [2024-05-13 18:33:39.034758] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:23.350 [2024-05-13 18:33:39.034768] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.034772] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.350 [2024-05-13 18:33:39.034776] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9ba280) 00:20:23.350 [2024-05-13 18:33:39.034784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.350 [2024-05-13 18:33:39.034806] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02950, cid 0, qid 0 00:20:23.350 [2024-05-13 18:33:39.034866] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.350 [2024-05-13 18:33:39.034878] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.350 [2024-05-13 18:33:39.034882] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.034886] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02950) on tqpair=0x9ba280 00:20:23.351 [2024-05-13 18:33:39.034891] 
nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:23.351 [2024-05-13 18:33:39.034902] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.034906] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.034910] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9ba280) 00:20:23.351 [2024-05-13 18:33:39.034918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.351 [2024-05-13 18:33:39.034940] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02950, cid 0, qid 0 00:20:23.351 [2024-05-13 18:33:39.035122] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.351 [2024-05-13 18:33:39.035132] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.351 [2024-05-13 18:33:39.035136] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.035140] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02950) on tqpair=0x9ba280 00:20:23.351 [2024-05-13 18:33:39.035145] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:23.351 [2024-05-13 18:33:39.035151] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:23.351 [2024-05-13 18:33:39.035164] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:23.351 [2024-05-13 18:33:39.035190] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:23.351 [2024-05-13 18:33:39.035208] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.035214] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9ba280) 00:20:23.351 [2024-05-13 18:33:39.035223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.351 [2024-05-13 18:33:39.035247] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02950, cid 0, qid 0 00:20:23.351 [2024-05-13 18:33:39.035711] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.351 [2024-05-13 18:33:39.035730] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.351 [2024-05-13 18:33:39.035735] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.035739] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9ba280): datao=0, datal=4096, cccid=0 00:20:23.351 [2024-05-13 18:33:39.035745] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa02950) on tqpair(0x9ba280): expected_datao=0, payload_size=4096 00:20:23.351 [2024-05-13 18:33:39.035750] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.035760] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.035765] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.351 [2024-05-13 
18:33:39.035775] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.351 [2024-05-13 18:33:39.035782] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.351 [2024-05-13 18:33:39.035785] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.035790] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02950) on tqpair=0x9ba280 00:20:23.351 [2024-05-13 18:33:39.035800] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:23.351 [2024-05-13 18:33:39.035806] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:23.351 [2024-05-13 18:33:39.035811] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:23.351 [2024-05-13 18:33:39.035816] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:23.351 [2024-05-13 18:33:39.035821] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:23.351 [2024-05-13 18:33:39.035827] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:23.351 [2024-05-13 18:33:39.035836] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:23.351 [2024-05-13 18:33:39.035850] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.035856] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.035860] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9ba280) 00:20:23.351 [2024-05-13 18:33:39.035869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.351 [2024-05-13 18:33:39.035894] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02950, cid 0, qid 0 00:20:23.351 [2024-05-13 18:33:39.035961] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.351 [2024-05-13 18:33:39.035972] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.351 [2024-05-13 18:33:39.035976] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.035980] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02950) on tqpair=0x9ba280 00:20:23.351 [2024-05-13 18:33:39.035989] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.035994] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.035998] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9ba280) 00:20:23.351 [2024-05-13 18:33:39.036009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.351 [2024-05-13 18:33:39.036020] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.036028] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.036034] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9ba280) 
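The ASYNC EVENT REQUEST (0c) submissions logged here for qid:0, cid:0 through cid:3, are the admin queue's outstanding AER slots (matching the "Async Event Request Limit: 4" reported later in the identify output). For illustration only, and reusing the hypothetical ctrlr handle from the previous sketch, an application could observe those completions through the AER callback API; the callback body and the simplistic polling loop below are assumptions, not something this test does:

/* aer_sketch.c fragment -- hypothetical: hook the AER completions behind the
 * four ASYNC EVENT REQUEST commands above. `ctrlr` is the handle returned by
 * spdk_nvme_connect() in the previous sketch. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    (void)arg;
    /* For an AER completion, cdw0 encodes the event type/info. */
    if (!spdk_nvme_cpl_is_error(cpl)) {
        printf("async event completed: cdw0=0x%08x\n", cpl->cdw0);
    }
}

void
watch_async_events(struct spdk_nvme_ctrlr *ctrlr)
{
    /* The driver keeps the AER slots re-armed internally; the application
     * only registers a callback and keeps processing admin completions. */
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

    for (;;) {
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
}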
00:20:23.351 [2024-05-13 18:33:39.036045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.351 [2024-05-13 18:33:39.036055] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.036060] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.036063] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9ba280) 00:20:23.351 [2024-05-13 18:33:39.036070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.351 [2024-05-13 18:33:39.036077] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.036089] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.036093] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.351 [2024-05-13 18:33:39.036099] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.351 [2024-05-13 18:33:39.036105] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:23.351 [2024-05-13 18:33:39.036124] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:23.351 [2024-05-13 18:33:39.036136] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.036144] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9ba280) 00:20:23.351 [2024-05-13 18:33:39.036155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.351 [2024-05-13 18:33:39.036190] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02950, cid 0, qid 0 00:20:23.351 [2024-05-13 18:33:39.036199] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02ab0, cid 1, qid 0 00:20:23.351 [2024-05-13 18:33:39.036204] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02c10, cid 2, qid 0 00:20:23.351 [2024-05-13 18:33:39.036209] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.351 [2024-05-13 18:33:39.036215] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02ed0, cid 4, qid 0 00:20:23.351 [2024-05-13 18:33:39.036730] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.351 [2024-05-13 18:33:39.036759] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.351 [2024-05-13 18:33:39.036765] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.036769] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02ed0) on tqpair=0x9ba280 00:20:23.351 [2024-05-13 18:33:39.036775] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:23.351 [2024-05-13 18:33:39.036782] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:23.351 [2024-05-13 
18:33:39.036797] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:23.351 [2024-05-13 18:33:39.036806] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:23.351 [2024-05-13 18:33:39.036813] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.036818] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.036822] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9ba280) 00:20:23.351 [2024-05-13 18:33:39.036831] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:23.351 [2024-05-13 18:33:39.036856] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02ed0, cid 4, qid 0 00:20:23.351 [2024-05-13 18:33:39.036926] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.351 [2024-05-13 18:33:39.036936] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.351 [2024-05-13 18:33:39.036942] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.036949] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02ed0) on tqpair=0x9ba280 00:20:23.351 [2024-05-13 18:33:39.037019] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:23.351 [2024-05-13 18:33:39.037035] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:23.351 [2024-05-13 18:33:39.037048] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.037055] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9ba280) 00:20:23.351 [2024-05-13 18:33:39.037067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.351 [2024-05-13 18:33:39.037098] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02ed0, cid 4, qid 0 00:20:23.351 [2024-05-13 18:33:39.037346] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.351 [2024-05-13 18:33:39.037365] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.351 [2024-05-13 18:33:39.037370] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.351 [2024-05-13 18:33:39.037376] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9ba280): datao=0, datal=4096, cccid=4 00:20:23.351 [2024-05-13 18:33:39.037384] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa02ed0) on tqpair(0x9ba280): expected_datao=0, payload_size=4096 00:20:23.352 [2024-05-13 18:33:39.037391] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.037403] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.037409] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.037457] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.352 [2024-05-13 18:33:39.037466] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.352 [2024-05-13 18:33:39.037470] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.037474] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02ed0) on tqpair=0x9ba280 00:20:23.352 [2024-05-13 18:33:39.037495] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:23.352 [2024-05-13 18:33:39.037518] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:23.352 [2024-05-13 18:33:39.037536] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:23.352 [2024-05-13 18:33:39.037551] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.037558] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9ba280) 00:20:23.352 [2024-05-13 18:33:39.037568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.352 [2024-05-13 18:33:39.041623] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02ed0, cid 4, qid 0 00:20:23.352 [2024-05-13 18:33:39.041743] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.352 [2024-05-13 18:33:39.041751] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.352 [2024-05-13 18:33:39.041755] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.041759] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9ba280): datao=0, datal=4096, cccid=4 00:20:23.352 [2024-05-13 18:33:39.041765] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa02ed0) on tqpair(0x9ba280): expected_datao=0, payload_size=4096 00:20:23.352 [2024-05-13 18:33:39.041769] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.041777] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.041782] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.041791] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.352 [2024-05-13 18:33:39.041797] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.352 [2024-05-13 18:33:39.041801] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.041805] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02ed0) on tqpair=0x9ba280 00:20:23.352 [2024-05-13 18:33:39.041828] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:23.352 [2024-05-13 18:33:39.041843] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:23.352 [2024-05-13 18:33:39.041858] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.041866] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9ba280) 00:20:23.352 [2024-05-13 18:33:39.041878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.352 [2024-05-13 18:33:39.041914] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02ed0, cid 4, qid 0 00:20:23.352 [2024-05-13 18:33:39.042433] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.352 [2024-05-13 18:33:39.042453] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.352 [2024-05-13 18:33:39.042458] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.042463] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9ba280): datao=0, datal=4096, cccid=4 00:20:23.352 [2024-05-13 18:33:39.042468] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa02ed0) on tqpair(0x9ba280): expected_datao=0, payload_size=4096 00:20:23.352 [2024-05-13 18:33:39.042473] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.042480] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.042485] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.042494] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.352 [2024-05-13 18:33:39.042501] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.352 [2024-05-13 18:33:39.042505] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.042509] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02ed0) on tqpair=0x9ba280 00:20:23.352 [2024-05-13 18:33:39.042519] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:23.352 [2024-05-13 18:33:39.042530] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:23.352 [2024-05-13 18:33:39.042544] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:23.352 [2024-05-13 18:33:39.042551] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:23.352 [2024-05-13 18:33:39.042557] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:23.352 [2024-05-13 18:33:39.042562] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:23.352 [2024-05-13 18:33:39.042567] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:23.352 [2024-05-13 18:33:39.042589] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:23.352 [2024-05-13 18:33:39.042620] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.042629] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9ba280) 00:20:23.352 [2024-05-13 18:33:39.042642] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.352 [2024-05-13 18:33:39.042655] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.042662] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.042666] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9ba280) 00:20:23.352 [2024-05-13 18:33:39.042673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:23.352 [2024-05-13 18:33:39.042710] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02ed0, cid 4, qid 0 00:20:23.352 [2024-05-13 18:33:39.042721] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03030, cid 5, qid 0 00:20:23.352 [2024-05-13 18:33:39.043173] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.352 [2024-05-13 18:33:39.043192] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.352 [2024-05-13 18:33:39.043197] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.043201] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02ed0) on tqpair=0x9ba280 00:20:23.352 [2024-05-13 18:33:39.043209] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.352 [2024-05-13 18:33:39.043215] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.352 [2024-05-13 18:33:39.043219] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.043223] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa03030) on tqpair=0x9ba280 00:20:23.352 [2024-05-13 18:33:39.043236] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.043240] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9ba280) 00:20:23.352 [2024-05-13 18:33:39.043249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.352 [2024-05-13 18:33:39.043273] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03030, cid 5, qid 0 00:20:23.352 [2024-05-13 18:33:39.043340] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.352 [2024-05-13 18:33:39.043350] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.352 [2024-05-13 18:33:39.043354] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.043360] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa03030) on tqpair=0x9ba280 00:20:23.352 [2024-05-13 18:33:39.043375] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.043384] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9ba280) 00:20:23.352 [2024-05-13 18:33:39.043396] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.352 [2024-05-13 18:33:39.043423] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03030, cid 5, qid 0 00:20:23.352 [2024-05-13 18:33:39.043973] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.352 [2024-05-13 18:33:39.043993] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.352 [2024-05-13 18:33:39.044001] 
nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.044008] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa03030) on tqpair=0x9ba280 00:20:23.352 [2024-05-13 18:33:39.044023] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.044028] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9ba280) 00:20:23.352 [2024-05-13 18:33:39.044036] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.352 [2024-05-13 18:33:39.044063] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03030, cid 5, qid 0 00:20:23.352 [2024-05-13 18:33:39.044249] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.352 [2024-05-13 18:33:39.044259] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.352 [2024-05-13 18:33:39.044263] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.044269] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa03030) on tqpair=0x9ba280 00:20:23.352 [2024-05-13 18:33:39.044292] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.044303] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9ba280) 00:20:23.352 [2024-05-13 18:33:39.044315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.352 [2024-05-13 18:33:39.044325] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.044330] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9ba280) 00:20:23.352 [2024-05-13 18:33:39.044337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.352 [2024-05-13 18:33:39.044345] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.044350] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x9ba280) 00:20:23.352 [2024-05-13 18:33:39.044360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.352 [2024-05-13 18:33:39.044370] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.352 [2024-05-13 18:33:39.044374] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9ba280) 00:20:23.353 [2024-05-13 18:33:39.044381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.353 [2024-05-13 18:33:39.044417] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03030, cid 5, qid 0 00:20:23.353 [2024-05-13 18:33:39.044430] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02ed0, cid 4, qid 0 00:20:23.353 [2024-05-13 18:33:39.044438] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa03190, cid 6, qid 0 00:20:23.353 [2024-05-13 18:33:39.044443] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa032f0, cid 7, qid 0 00:20:23.353 [2024-05-13 18:33:39.045064] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.353 [2024-05-13 18:33:39.045084] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.353 [2024-05-13 18:33:39.045089] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045094] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9ba280): datao=0, datal=8192, cccid=5 00:20:23.353 [2024-05-13 18:33:39.045099] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03030) on tqpair(0x9ba280): expected_datao=0, payload_size=8192 00:20:23.353 [2024-05-13 18:33:39.045104] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045123] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045129] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045135] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.353 [2024-05-13 18:33:39.045141] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.353 [2024-05-13 18:33:39.045146] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045149] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9ba280): datao=0, datal=512, cccid=4 00:20:23.353 [2024-05-13 18:33:39.045154] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa02ed0) on tqpair(0x9ba280): expected_datao=0, payload_size=512 00:20:23.353 [2024-05-13 18:33:39.045159] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045166] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045170] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045175] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.353 [2024-05-13 18:33:39.045182] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.353 [2024-05-13 18:33:39.045185] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045189] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9ba280): datao=0, datal=512, cccid=6 00:20:23.353 [2024-05-13 18:33:39.045194] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa03190) on tqpair(0x9ba280): expected_datao=0, payload_size=512 00:20:23.353 [2024-05-13 18:33:39.045198] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045205] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045209] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045215] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:23.353 [2024-05-13 18:33:39.045221] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:23.353 [2024-05-13 18:33:39.045224] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045228] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9ba280): datao=0, datal=4096, cccid=7 00:20:23.353 [2024-05-13 18:33:39.045233] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0xa032f0) on tqpair(0x9ba280): expected_datao=0, payload_size=4096 00:20:23.353 [2024-05-13 18:33:39.045237] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045245] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045251] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045268] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.353 [2024-05-13 18:33:39.045275] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.353 [2024-05-13 18:33:39.045279] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045285] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa03030) on tqpair=0x9ba280 00:20:23.353 [2024-05-13 18:33:39.045311] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.353 [2024-05-13 18:33:39.045324] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.353 [2024-05-13 18:33:39.045331] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.353 [2024-05-13 18:33:39.045335] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02ed0) on tqpair=0x9ba280 00:20:23.353 [2024-05-13 18:33:39.045348] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.353 ===================================================== 00:20:23.353 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:23.353 ===================================================== 00:20:23.353 Controller Capabilities/Features 00:20:23.353 ================================ 00:20:23.353 Vendor ID: 8086 00:20:23.353 Subsystem Vendor ID: 8086 00:20:23.353 Serial Number: SPDK00000000000001 00:20:23.353 Model Number: SPDK bdev Controller 00:20:23.353 Firmware Version: 24.05 00:20:23.353 Recommended Arb Burst: 6 00:20:23.353 IEEE OUI Identifier: e4 d2 5c 00:20:23.353 Multi-path I/O 00:20:23.353 May have multiple subsystem ports: Yes 00:20:23.353 May have multiple controllers: Yes 00:20:23.353 Associated with SR-IOV VF: No 00:20:23.353 Max Data Transfer Size: 131072 00:20:23.353 Max Number of Namespaces: 32 00:20:23.353 Max Number of I/O Queues: 127 00:20:23.353 NVMe Specification Version (VS): 1.3 00:20:23.353 NVMe Specification Version (Identify): 1.3 00:20:23.353 Maximum Queue Entries: 128 00:20:23.353 Contiguous Queues Required: Yes 00:20:23.353 Arbitration Mechanisms Supported 00:20:23.353 Weighted Round Robin: Not Supported 00:20:23.353 Vendor Specific: Not Supported 00:20:23.353 Reset Timeout: 15000 ms 00:20:23.353 Doorbell Stride: 4 bytes 00:20:23.353 NVM Subsystem Reset: Not Supported 00:20:23.353 Command Sets Supported 00:20:23.353 NVM Command Set: Supported 00:20:23.353 Boot Partition: Not Supported 00:20:23.353 Memory Page Size Minimum: 4096 bytes 00:20:23.353 Memory Page Size Maximum: 4096 bytes 00:20:23.353 Persistent Memory Region: Not Supported 00:20:23.353 Optional Asynchronous Events Supported 00:20:23.353 Namespace Attribute Notices: Supported 00:20:23.353 Firmware Activation Notices: Not Supported 00:20:23.353 ANA Change Notices: Not Supported 00:20:23.353 PLE Aggregate Log Change Notices: Not Supported 00:20:23.353 LBA Status Info Alert Notices: Not Supported 00:20:23.353 EGE Aggregate Log Change Notices: Not Supported 00:20:23.353 Normal NVM Subsystem Shutdown event: Not Supported 00:20:23.353 Zone Descriptor Change Notices: Not Supported 
00:20:23.353 Discovery Log Change Notices: Not Supported 00:20:23.353 Controller Attributes 00:20:23.353 128-bit Host Identifier: Supported 00:20:23.353 Non-Operational Permissive Mode: Not Supported 00:20:23.353 NVM Sets: Not Supported 00:20:23.353 Read Recovery Levels: Not Supported 00:20:23.353 Endurance Groups: Not Supported 00:20:23.353 Predictable Latency Mode: Not Supported 00:20:23.353 Traffic Based Keep ALive: Not Supported 00:20:23.353 Namespace Granularity: Not Supported 00:20:23.353 SQ Associations: Not Supported 00:20:23.353 UUID List: Not Supported 00:20:23.353 Multi-Domain Subsystem: Not Supported 00:20:23.353 Fixed Capacity Management: Not Supported 00:20:23.353 Variable Capacity Management: Not Supported 00:20:23.353 Delete Endurance Group: Not Supported 00:20:23.353 Delete NVM Set: Not Supported 00:20:23.353 Extended LBA Formats Supported: Not Supported 00:20:23.353 Flexible Data Placement Supported: Not Supported 00:20:23.353 00:20:23.353 Controller Memory Buffer Support 00:20:23.353 ================================ 00:20:23.353 Supported: No 00:20:23.353 00:20:23.353 Persistent Memory Region Support 00:20:23.353 ================================ 00:20:23.353 Supported: No 00:20:23.353 00:20:23.353 Admin Command Set Attributes 00:20:23.353 ============================ 00:20:23.353 Security Send/Receive: Not Supported 00:20:23.353 Format NVM: Not Supported 00:20:23.353 Firmware Activate/Download: Not Supported 00:20:23.353 Namespace Management: Not Supported 00:20:23.353 Device Self-Test: Not Supported 00:20:23.353 Directives: Not Supported 00:20:23.353 NVMe-MI: Not Supported 00:20:23.353 Virtualization Management: Not Supported 00:20:23.353 Doorbell Buffer Config: Not Supported 00:20:23.353 Get LBA Status Capability: Not Supported 00:20:23.353 Command & Feature Lockdown Capability: Not Supported 00:20:23.353 Abort Command Limit: 4 00:20:23.353 Async Event Request Limit: 4 00:20:23.353 Number of Firmware Slots: N/A 00:20:23.353 Firmware Slot 1 Read-Only: N/A 00:20:23.353 Firmware Activation Without Reset: N/A 00:20:23.353 Multiple Update Detection Support: N/A 00:20:23.353 Firmware Update Granularity: No Information Provided 00:20:23.353 Per-Namespace SMART Log: No 00:20:23.353 Asymmetric Namespace Access Log Page: Not Supported 00:20:23.353 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:23.353 Command Effects Log Page: Supported 00:20:23.353 Get Log Page Extended Data: Supported 00:20:23.353 Telemetry Log Pages: Not Supported 00:20:23.353 Persistent Event Log Pages: Not Supported 00:20:23.353 Supported Log Pages Log Page: May Support 00:20:23.353 Commands Supported & Effects Log Page: Not Supported 00:20:23.353 Feature Identifiers & Effects Log Page:May Support 00:20:23.353 NVMe-MI Commands & Effects Log Page: May Support 00:20:23.353 Data Area 4 for Telemetry Log: Not Supported 00:20:23.353 Error Log Page Entries Supported: 128 00:20:23.353 Keep Alive: Supported 00:20:23.353 Keep Alive Granularity: 10000 ms 00:20:23.353 00:20:23.353 NVM Command Set Attributes 00:20:23.353 ========================== 00:20:23.353 Submission Queue Entry Size 00:20:23.353 Max: 64 00:20:23.353 Min: 64 00:20:23.353 Completion Queue Entry Size 00:20:23.353 Max: 16 00:20:23.353 Min: 16 00:20:23.353 Number of Namespaces: 32 00:20:23.354 Compare Command: Supported 00:20:23.354 Write Uncorrectable Command: Not Supported 00:20:23.354 Dataset Management Command: Supported 00:20:23.354 Write Zeroes Command: Supported 00:20:23.354 Set Features Save Field: Not Supported 00:20:23.354 Reservations: 
Supported 00:20:23.354 Timestamp: Not Supported 00:20:23.354 Copy: Supported 00:20:23.354 Volatile Write Cache: Present 00:20:23.354 Atomic Write Unit (Normal): 1 00:20:23.354 Atomic Write Unit (PFail): 1 00:20:23.354 Atomic Compare & Write Unit: 1 00:20:23.354 Fused Compare & Write: Supported 00:20:23.354 Scatter-Gather List 00:20:23.354 SGL Command Set: Supported 00:20:23.354 SGL Keyed: Supported 00:20:23.354 SGL Bit Bucket Descriptor: Not Supported 00:20:23.354 SGL Metadata Pointer: Not Supported 00:20:23.354 Oversized SGL: Not Supported 00:20:23.354 SGL Metadata Address: Not Supported 00:20:23.354 SGL Offset: Supported 00:20:23.354 Transport SGL Data Block: Not Supported 00:20:23.354 Replay Protected Memory Block: Not Supported 00:20:23.354 00:20:23.354 Firmware Slot Information 00:20:23.354 ========================= 00:20:23.354 Active slot: 1 00:20:23.354 Slot 1 Firmware Revision: 24.05 00:20:23.354 00:20:23.354 00:20:23.354 Commands Supported and Effects 00:20:23.354 ============================== 00:20:23.354 Admin Commands 00:20:23.354 -------------- 00:20:23.354 Get Log Page (02h): Supported 00:20:23.354 Identify (06h): Supported 00:20:23.354 Abort (08h): Supported 00:20:23.354 Set Features (09h): Supported 00:20:23.354 Get Features (0Ah): Supported 00:20:23.354 Asynchronous Event Request (0Ch): Supported 00:20:23.354 Keep Alive (18h): Supported 00:20:23.354 I/O Commands 00:20:23.354 ------------ 00:20:23.354 Flush (00h): Supported LBA-Change 00:20:23.354 Write (01h): Supported LBA-Change 00:20:23.354 Read (02h): Supported 00:20:23.354 Compare (05h): Supported 00:20:23.354 Write Zeroes (08h): Supported LBA-Change 00:20:23.354 Dataset Management (09h): Supported LBA-Change 00:20:23.354 Copy (19h): Supported LBA-Change 00:20:23.354 Unknown (79h): Supported LBA-Change 00:20:23.354 Unknown (7Ah): Supported 00:20:23.354 00:20:23.354 Error Log 00:20:23.354 ========= 00:20:23.354 00:20:23.354 Arbitration 00:20:23.354 =========== 00:20:23.354 Arbitration Burst: 1 00:20:23.354 00:20:23.354 Power Management 00:20:23.354 ================ 00:20:23.354 Number of Power States: 1 00:20:23.354 Current Power State: Power State #0 00:20:23.354 Power State #0: 00:20:23.354 Max Power: 0.00 W 00:20:23.354 Non-Operational State: Operational 00:20:23.354 Entry Latency: Not Reported 00:20:23.354 Exit Latency: Not Reported 00:20:23.354 Relative Read Throughput: 0 00:20:23.354 Relative Read Latency: 0 00:20:23.354 Relative Write Throughput: 0 00:20:23.354 Relative Write Latency: 0 00:20:23.354 Idle Power: Not Reported 00:20:23.354 Active Power: Not Reported 00:20:23.354 Non-Operational Permissive Mode: Not Supported 00:20:23.354 00:20:23.354 Health Information 00:20:23.354 ================== 00:20:23.354 Critical Warnings: 00:20:23.354 Available Spare Space: OK 00:20:23.354 Temperature: OK 00:20:23.354 Device Reliability: OK 00:20:23.354 Read Only: No 00:20:23.354 Volatile Memory Backup: OK 00:20:23.354 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:23.354 Temperature Threshold: [2024-05-13 18:33:39.045354] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.354 [2024-05-13 18:33:39.045358] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.354 [2024-05-13 18:33:39.045362] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa03190) on tqpair=0x9ba280 00:20:23.354 [2024-05-13 18:33:39.045377] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.354 [2024-05-13 18:33:39.045384] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:23.354 [2024-05-13 18:33:39.045388] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.354 [2024-05-13 18:33:39.045394] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa032f0) on tqpair=0x9ba280 00:20:23.354 [2024-05-13 18:33:39.045540] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.354 [2024-05-13 18:33:39.045552] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x9ba280) 00:20:23.354 [2024-05-13 18:33:39.045565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.354 [2024-05-13 18:33:39.049619] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa032f0, cid 7, qid 0 00:20:23.354 [2024-05-13 18:33:39.049709] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.354 [2024-05-13 18:33:39.049722] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.354 [2024-05-13 18:33:39.049729] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.354 [2024-05-13 18:33:39.049737] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa032f0) on tqpair=0x9ba280 00:20:23.354 [2024-05-13 18:33:39.049796] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:23.354 [2024-05-13 18:33:39.049827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.354 [2024-05-13 18:33:39.049840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.354 [2024-05-13 18:33:39.049850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.354 [2024-05-13 18:33:39.049858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:23.354 [2024-05-13 18:33:39.049875] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.354 [2024-05-13 18:33:39.049883] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.354 [2024-05-13 18:33:39.049889] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.354 [2024-05-13 18:33:39.049902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.354 [2024-05-13 18:33:39.049936] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.354 [2024-05-13 18:33:39.050190] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.354 [2024-05-13 18:33:39.050209] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.354 [2024-05-13 18:33:39.050214] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.354 [2024-05-13 18:33:39.050218] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.354 [2024-05-13 18:33:39.050228] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.354 [2024-05-13 18:33:39.050232] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.354 [2024-05-13 18:33:39.050236] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.354 [2024-05-13 18:33:39.050245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.354 [2024-05-13 18:33:39.050275] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.354 [2024-05-13 18:33:39.050645] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.354 [2024-05-13 18:33:39.050664] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.354 [2024-05-13 18:33:39.050669] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.354 [2024-05-13 18:33:39.050674] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.355 [2024-05-13 18:33:39.050680] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:23.355 [2024-05-13 18:33:39.050685] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:23.355 [2024-05-13 18:33:39.050697] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.050702] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.050706] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.355 [2024-05-13 18:33:39.050714] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.355 [2024-05-13 18:33:39.050785] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.355 [2024-05-13 18:33:39.051098] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.355 [2024-05-13 18:33:39.051116] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.355 [2024-05-13 18:33:39.051121] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.051126] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.355 [2024-05-13 18:33:39.051144] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.051153] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.051157] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.355 [2024-05-13 18:33:39.051166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.355 [2024-05-13 18:33:39.051197] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.355 [2024-05-13 18:33:39.051520] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.355 [2024-05-13 18:33:39.051537] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.355 [2024-05-13 18:33:39.051543] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.051547] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.355 [2024-05-13 18:33:39.051559] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.051564] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:20:23.355 [2024-05-13 18:33:39.051568] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.355 [2024-05-13 18:33:39.051592] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.355 [2024-05-13 18:33:39.051618] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.355 [2024-05-13 18:33:39.051866] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.355 [2024-05-13 18:33:39.051887] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.355 [2024-05-13 18:33:39.051895] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.051902] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.355 [2024-05-13 18:33:39.051915] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.051921] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.051925] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.355 [2024-05-13 18:33:39.051933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.355 [2024-05-13 18:33:39.051960] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.355 [2024-05-13 18:33:39.052373] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.355 [2024-05-13 18:33:39.052391] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.355 [2024-05-13 18:33:39.052396] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.052400] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.355 [2024-05-13 18:33:39.052412] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.052417] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.052421] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.355 [2024-05-13 18:33:39.052429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.355 [2024-05-13 18:33:39.052452] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.355 [2024-05-13 18:33:39.052515] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.355 [2024-05-13 18:33:39.052525] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.355 [2024-05-13 18:33:39.052531] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.052538] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.355 [2024-05-13 18:33:39.052555] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.052564] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.052587] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.355 [2024-05-13 18:33:39.052597] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.355 [2024-05-13 18:33:39.052621] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.355 [2024-05-13 18:33:39.052988] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.355 [2024-05-13 18:33:39.053007] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.355 [2024-05-13 18:33:39.053012] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.053016] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.355 [2024-05-13 18:33:39.053029] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.053034] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.053038] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.355 [2024-05-13 18:33:39.053046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.355 [2024-05-13 18:33:39.053068] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.355 [2024-05-13 18:33:39.053287] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.355 [2024-05-13 18:33:39.053305] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.355 [2024-05-13 18:33:39.053311] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.053319] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.355 [2024-05-13 18:33:39.053335] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.053341] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.053345] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.355 [2024-05-13 18:33:39.053353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.355 [2024-05-13 18:33:39.053385] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.355 [2024-05-13 18:33:39.053790] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.355 [2024-05-13 18:33:39.053809] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.355 [2024-05-13 18:33:39.053814] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.053818] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.355 [2024-05-13 18:33:39.053830] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.053835] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.053839] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.355 [2024-05-13 18:33:39.053847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.355 [2024-05-13 18:33:39.053873] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.355 [2024-05-13 18:33:39.054135] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.355 [2024-05-13 18:33:39.054158] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.355 [2024-05-13 18:33:39.054165] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.054169] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.355 [2024-05-13 18:33:39.054182] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.054187] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.355 [2024-05-13 18:33:39.054191] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.356 [2024-05-13 18:33:39.054199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.356 [2024-05-13 18:33:39.054224] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.356 [2024-05-13 18:33:39.054592] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.356 [2024-05-13 18:33:39.054610] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.356 [2024-05-13 18:33:39.054614] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.054619] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.356 [2024-05-13 18:33:39.054631] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.054636] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.054640] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.356 [2024-05-13 18:33:39.054648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.356 [2024-05-13 18:33:39.054687] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.356 [2024-05-13 18:33:39.054747] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.356 [2024-05-13 18:33:39.054754] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.356 [2024-05-13 18:33:39.054758] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.054763] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.356 [2024-05-13 18:33:39.054779] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.054786] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.054792] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.356 [2024-05-13 18:33:39.054804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.356 [2024-05-13 18:33:39.054835] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.356 [2024-05-13 18:33:39.055185] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.356 [2024-05-13 
18:33:39.055205] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.356 [2024-05-13 18:33:39.055212] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.055219] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.356 [2024-05-13 18:33:39.055234] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.055240] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.055244] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.356 [2024-05-13 18:33:39.055252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.356 [2024-05-13 18:33:39.055279] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.356 [2024-05-13 18:33:39.055609] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.356 [2024-05-13 18:33:39.055627] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.356 [2024-05-13 18:33:39.055631] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.055636] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.356 [2024-05-13 18:33:39.055648] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.055653] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.055657] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.356 [2024-05-13 18:33:39.055665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.356 [2024-05-13 18:33:39.055688] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.356 [2024-05-13 18:33:39.055955] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.356 [2024-05-13 18:33:39.055975] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.356 [2024-05-13 18:33:39.055983] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.055990] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.356 [2024-05-13 18:33:39.056003] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.056008] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.056012] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.356 [2024-05-13 18:33:39.056019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.356 [2024-05-13 18:33:39.056046] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.356 [2024-05-13 18:33:39.056383] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.356 [2024-05-13 18:33:39.056400] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.356 [2024-05-13 18:33:39.056405] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.356 
[2024-05-13 18:33:39.056409] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.356 [2024-05-13 18:33:39.056421] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.056426] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.056430] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.356 [2024-05-13 18:33:39.056438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.356 [2024-05-13 18:33:39.056460] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.356 [2024-05-13 18:33:39.056756] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.356 [2024-05-13 18:33:39.056778] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.356 [2024-05-13 18:33:39.056783] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.056788] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.356 [2024-05-13 18:33:39.056800] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.056805] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.056809] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.356 [2024-05-13 18:33:39.056817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.356 [2024-05-13 18:33:39.056848] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.356 [2024-05-13 18:33:39.057120] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.356 [2024-05-13 18:33:39.057144] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.356 [2024-05-13 18:33:39.057150] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.057154] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.356 [2024-05-13 18:33:39.057167] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.057172] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.356 [2024-05-13 18:33:39.057176] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.356 [2024-05-13 18:33:39.057183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.356 [2024-05-13 18:33:39.057208] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.357 [2024-05-13 18:33:39.057481] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.357 [2024-05-13 18:33:39.057500] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.357 [2024-05-13 18:33:39.057505] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.357 [2024-05-13 18:33:39.057510] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.357 [2024-05-13 18:33:39.057522] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.357 [2024-05-13 18:33:39.057527] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.357 [2024-05-13 18:33:39.057531] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.357 [2024-05-13 18:33:39.057539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.357 [2024-05-13 18:33:39.057563] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.357 [2024-05-13 18:33:39.061603] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.357 [2024-05-13 18:33:39.061616] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.357 [2024-05-13 18:33:39.061620] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.357 [2024-05-13 18:33:39.061624] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.357 [2024-05-13 18:33:39.061639] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:23.357 [2024-05-13 18:33:39.061645] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:23.357 [2024-05-13 18:33:39.061649] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9ba280) 00:20:23.357 [2024-05-13 18:33:39.061658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:23.357 [2024-05-13 18:33:39.061686] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa02d70, cid 3, qid 0 00:20:23.357 [2024-05-13 18:33:39.061935] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:23.357 [2024-05-13 18:33:39.061955] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:23.357 [2024-05-13 18:33:39.061962] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:23.357 [2024-05-13 18:33:39.061970] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa02d70) on tqpair=0x9ba280 00:20:23.357 [2024-05-13 18:33:39.061983] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 11 milliseconds 00:20:23.357 0 Kelvin (-273 Celsius) 00:20:23.357 Available Spare: 0% 00:20:23.357 Available Spare Threshold: 0% 00:20:23.357 Life Percentage Used: 0% 00:20:23.357 Data Units Read: 0 00:20:23.357 Data Units Written: 0 00:20:23.357 Host Read Commands: 0 00:20:23.357 Host Write Commands: 0 00:20:23.357 Controller Busy Time: 0 minutes 00:20:23.357 Power Cycles: 0 00:20:23.357 Power On Hours: 0 hours 00:20:23.357 Unsafe Shutdowns: 0 00:20:23.357 Unrecoverable Media Errors: 0 00:20:23.357 Lifetime Error Log Entries: 0 00:20:23.357 Warning Temperature Time: 0 minutes 00:20:23.357 Critical Temperature Time: 0 minutes 00:20:23.357 00:20:23.357 Number of Queues 00:20:23.357 ================ 00:20:23.357 Number of I/O Submission Queues: 127 00:20:23.357 Number of I/O Completion Queues: 127 00:20:23.357 00:20:23.357 Active Namespaces 00:20:23.357 ================= 00:20:23.357 Namespace ID:1 00:20:23.357 Error Recovery Timeout: Unlimited 00:20:23.357 Command Set Identifier: NVM (00h) 00:20:23.357 Deallocate: Supported 00:20:23.357 Deallocated/Unwritten Error: Not Supported 00:20:23.357 Deallocated Read Value: Unknown 00:20:23.357 Deallocate in Write Zeroes: Not Supported 00:20:23.357 Deallocated Guard 
Field: 0xFFFF 00:20:23.357 Flush: Supported 00:20:23.357 Reservation: Supported 00:20:23.357 Namespace Sharing Capabilities: Multiple Controllers 00:20:23.357 Size (in LBAs): 131072 (0GiB) 00:20:23.357 Capacity (in LBAs): 131072 (0GiB) 00:20:23.357 Utilization (in LBAs): 131072 (0GiB) 00:20:23.357 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:23.357 EUI64: ABCDEF0123456789 00:20:23.357 UUID: 6697e044-1ea4-4655-baa8-3eea7c3c7e34 00:20:23.357 Thin Provisioning: Not Supported 00:20:23.357 Per-NS Atomic Units: Yes 00:20:23.357 Atomic Boundary Size (Normal): 0 00:20:23.357 Atomic Boundary Size (PFail): 0 00:20:23.357 Atomic Boundary Offset: 0 00:20:23.357 Maximum Single Source Range Length: 65535 00:20:23.357 Maximum Copy Length: 65535 00:20:23.357 Maximum Source Range Count: 1 00:20:23.357 NGUID/EUI64 Never Reused: No 00:20:23.357 Namespace Write Protected: No 00:20:23.357 Number of LBA Formats: 1 00:20:23.357 Current LBA Format: LBA Format #00 00:20:23.357 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:23.357 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:23.357 rmmod nvme_tcp 00:20:23.357 rmmod nvme_fabrics 00:20:23.357 rmmod nvme_keyring 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 88246 ']' 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 88246 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 88246 ']' 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 88246 00:20:23.357 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:20:23.358 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:23.358 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88246 00:20:23.358 killing process with pid 88246 00:20:23.358 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:23.358 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:23.358 18:33:39 
nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88246' 00:20:23.358 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 88246 00:20:23.358 [2024-05-13 18:33:39.212730] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:20:23.358 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 88246 00:20:23.938 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:23.938 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:23.938 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:23.938 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:23.938 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:23.938 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.938 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.938 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.938 18:33:39 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:23.938 00:20:23.938 real 0m2.821s 00:20:23.938 user 0m7.646s 00:20:23.938 sys 0m0.679s 00:20:23.938 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:23.938 18:33:39 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:23.938 ************************************ 00:20:23.938 END TEST nvmf_identify 00:20:23.938 ************************************ 00:20:23.938 18:33:39 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:23.938 18:33:39 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:23.938 18:33:39 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:23.938 18:33:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:23.938 ************************************ 00:20:23.938 START TEST nvmf_perf 00:20:23.938 ************************************ 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:23.938 * Looking for test storage... 
00:20:23.938 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@431 
-- # [[ tcp == tcp ]] 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@432 -- # nvmf_veth_init 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:23.938 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:23.939 Cannot find device "nvmf_tgt_br" 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@155 -- # true 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:23.939 Cannot find device "nvmf_tgt_br2" 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@156 -- # true 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:23.939 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:24.198 Cannot find device "nvmf_tgt_br" 00:20:24.198 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@158 -- # true 00:20:24.198 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:24.198 Cannot find device "nvmf_tgt_br2" 00:20:24.198 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@159 -- # true 00:20:24.198 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:24.198 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:24.198 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:24.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.198 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@162 -- # true 00:20:24.198 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:24.198 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:24.198 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@163 -- # true 00:20:24.198 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:24.198 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:24.198 18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:24.198 
18:33:39 nvmf_tcp.nvmf_perf -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:24.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:20:24.198 00:20:24.198 --- 10.0.0.2 ping statistics --- 00:20:24.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.198 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:20:24.198 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:24.456 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:24.456 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:20:24.456 00:20:24.456 --- 10.0.0.3 ping statistics --- 00:20:24.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.456 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:24.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:24.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:20:24.456 00:20:24.456 --- 10.0.0.1 ping statistics --- 00:20:24.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.456 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@433 -- # return 0 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=88471 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 88471 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 88471 ']' 00:20:24.456 18:33:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.457 18:33:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:24.457 18:33:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.457 18:33:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:24.457 18:33:40 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:24.457 [2024-05-13 18:33:40.226382] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:20:24.457 [2024-05-13 18:33:40.226471] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.457 [2024-05-13 18:33:40.364894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:24.715 [2024-05-13 18:33:40.491129] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.715 [2024-05-13 18:33:40.491199] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
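The nvmf_veth_init trace above boils down to a small veth/bridge topology: the initiator keeps 10.0.0.1 in the default namespace, both target interfaces (10.0.0.2 and 10.0.0.3) live inside nvmf_tgt_ns_spdk, and everything is joined through the nvmf_br bridge with TCP port 4420 opened for NVMe/TCP. A condensed sketch, reconstructed from the commands already shown (interface, namespace, and address names are the script's own; a few repeated commands are folded together here):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator <-> bridge
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target NIC 1 <-> bridge
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target NIC 2 <-> bridge
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
for port in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$port" master nvmf_br; done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three ping checks above simply confirm that 10.0.0.2 and 10.0.0.3 are reachable from the default namespace and that 10.0.0.1 is reachable from inside the namespace before the target application is started.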
00:20:24.715 [2024-05-13 18:33:40.491212] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.715 [2024-05-13 18:33:40.491221] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.715 [2024-05-13 18:33:40.491228] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:24.715 [2024-05-13 18:33:40.491517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.715 [2024-05-13 18:33:40.492807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:24.715 [2024-05-13 18:33:40.492887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:24.715 [2024-05-13 18:33:40.492892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.281 18:33:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:25.281 18:33:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:20:25.281 18:33:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:25.281 18:33:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:25.281 18:33:41 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:25.539 18:33:41 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.539 18:33:41 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:25.539 18:33:41 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:25.797 18:33:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:25.797 18:33:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:26.055 18:33:41 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:20:26.055 18:33:41 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:26.314 18:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:26.314 18:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:20:26.314 18:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:26.314 18:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:26.314 18:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:26.572 [2024-05-13 18:33:42.387339] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.572 18:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:26.831 18:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:26.831 18:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:27.090 18:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:27.090 18:33:42 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:27.348 18:33:43 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:27.607 [2024-05-13 18:33:43.480398] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:20:27.607 [2024-05-13 18:33:43.481371] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:27.607 18:33:43 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:27.866 18:33:43 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:27.866 18:33:43 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:27.866 18:33:43 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:27.866 18:33:43 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:28.871 Initializing NVMe Controllers 00:20:28.871 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:20:28.871 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:20:28.871 Initialization complete. Launching workers. 00:20:28.871 ======================================================== 00:20:28.871 Latency(us) 00:20:28.871 Device Information : IOPS MiB/s Average min max 00:20:28.871 PCIE (0000:00:10.0) NSID 1 from core 0: 24309.98 94.96 1315.97 366.43 8010.91 00:20:28.871 ======================================================== 00:20:28.871 Total : 24309.98 94.96 1315.97 366.43 8010.91 00:20:28.871 00:20:29.129 18:33:44 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:30.502 Initializing NVMe Controllers 00:20:30.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:30.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:30.502 Initialization complete. Launching workers. 00:20:30.502 ======================================================== 00:20:30.502 Latency(us) 00:20:30.502 Device Information : IOPS MiB/s Average min max 00:20:30.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2595.52 10.14 384.97 142.35 5174.04 00:20:30.502 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.50 0.49 8095.04 6024.76 12075.01 00:20:30.502 ======================================================== 00:20:30.502 Total : 2720.02 10.63 737.86 142.35 12075.01 00:20:30.502 00:20:30.502 18:33:46 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:31.944 Initializing NVMe Controllers 00:20:31.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:31.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:31.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:31.944 Initialization complete. Launching workers. 
00:20:31.944 ======================================================== 00:20:31.944 Latency(us) 00:20:31.944 Device Information : IOPS MiB/s Average min max 00:20:31.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7433.35 29.04 4304.39 771.02 9282.37 00:20:31.944 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2686.09 10.49 12032.29 5049.12 20171.05 00:20:31.944 ======================================================== 00:20:31.944 Total : 10119.44 39.53 6355.67 771.02 20171.05 00:20:31.944 00:20:31.944 18:33:47 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:31.944 18:33:47 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:34.471 Initializing NVMe Controllers 00:20:34.471 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.471 Controller IO queue size 128, less than required. 00:20:34.471 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.471 Controller IO queue size 128, less than required. 00:20:34.471 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:34.471 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:34.471 Initialization complete. Launching workers. 00:20:34.471 ======================================================== 00:20:34.471 Latency(us) 00:20:34.471 Device Information : IOPS MiB/s Average min max 00:20:34.471 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1211.49 302.87 107743.59 65587.63 206742.35 00:20:34.471 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 592.75 148.19 229574.08 85486.30 361786.14 00:20:34.471 ======================================================== 00:20:34.471 Total : 1804.25 451.06 147768.79 65587.63 361786.14 00:20:34.471 00:20:34.471 18:33:50 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:34.471 Initializing NVMe Controllers 00:20:34.471 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.471 Controller IO queue size 128, less than required. 00:20:34.471 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.471 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:34.471 Controller IO queue size 128, less than required. 00:20:34.471 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:34.471 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. 
Removing this ns from test 00:20:34.471 WARNING: Some requested NVMe devices were skipped 00:20:34.471 No valid NVMe controllers or AIO or URING devices found 00:20:34.471 18:33:50 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:37.001 Initializing NVMe Controllers 00:20:37.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.001 Controller IO queue size 128, less than required. 00:20:37.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.001 Controller IO queue size 128, less than required. 00:20:37.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:37.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:37.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:37.001 Initialization complete. Launching workers. 00:20:37.001 00:20:37.001 ==================== 00:20:37.001 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:37.001 TCP transport: 00:20:37.001 polls: 7589 00:20:37.001 idle_polls: 4109 00:20:37.001 sock_completions: 3480 00:20:37.001 nvme_completions: 4333 00:20:37.001 submitted_requests: 6466 00:20:37.001 queued_requests: 1 00:20:37.001 00:20:37.001 ==================== 00:20:37.001 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:37.001 TCP transport: 00:20:37.001 polls: 7808 00:20:37.001 idle_polls: 4624 00:20:37.001 sock_completions: 3184 00:20:37.002 nvme_completions: 6409 00:20:37.002 submitted_requests: 9648 00:20:37.002 queued_requests: 1 00:20:37.002 ======================================================== 00:20:37.002 Latency(us) 00:20:37.002 Device Information : IOPS MiB/s Average min max 00:20:37.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1080.85 270.21 121580.76 73031.27 211912.59 00:20:37.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1598.82 399.71 80334.77 49488.25 138546.86 00:20:37.002 ======================================================== 00:20:37.002 Total : 2679.67 669.92 96971.42 49488.25 211912.59 00:20:37.002 00:20:37.002 18:33:52 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:37.002 18:33:52 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:37.261 18:33:53 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:37.261 18:33:53 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:20:37.261 18:33:53 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:37.520 18:33:53 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=965c864b-5318-4b85-a19e-48992a87319e 00:20:37.520 18:33:53 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 965c864b-5318-4b85-a19e-48992a87319e 00:20:37.520 18:33:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=965c864b-5318-4b85-a19e-48992a87319e 00:20:37.520 18:33:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:20:37.520 18:33:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 
00:20:37.520 18:33:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:20:37.520 18:33:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:37.778 18:33:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:20:37.778 { 00:20:37.778 "base_bdev": "Nvme0n1", 00:20:37.778 "block_size": 4096, 00:20:37.778 "cluster_size": 4194304, 00:20:37.778 "free_clusters": 1278, 00:20:37.778 "name": "lvs_0", 00:20:37.778 "total_data_clusters": 1278, 00:20:37.778 "uuid": "965c864b-5318-4b85-a19e-48992a87319e" 00:20:37.778 } 00:20:37.778 ]' 00:20:37.778 18:33:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="965c864b-5318-4b85-a19e-48992a87319e") .free_clusters' 00:20:37.778 18:33:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1278 00:20:37.778 18:33:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="965c864b-5318-4b85-a19e-48992a87319e") .cluster_size' 00:20:37.778 5112 00:20:37.778 18:33:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:20:37.778 18:33:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5112 00:20:37.778 18:33:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5112 00:20:37.778 18:33:53 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:37.778 18:33:53 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 965c864b-5318-4b85-a19e-48992a87319e lbd_0 5112 00:20:38.034 18:33:53 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=b41cc125-f932-4933-a98e-106ee0af4a56 00:20:38.034 18:33:53 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore b41cc125-f932-4933-a98e-106ee0af4a56 lvs_n_0 00:20:38.599 18:33:54 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=d4504c0f-e784-4f0a-84ae-5135fb15075e 00:20:38.599 18:33:54 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb d4504c0f-e784-4f0a-84ae-5135fb15075e 00:20:38.599 18:33:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1360 -- # local lvs_uuid=d4504c0f-e784-4f0a-84ae-5135fb15075e 00:20:38.599 18:33:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1361 -- # local lvs_info 00:20:38.599 18:33:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1362 -- # local fc 00:20:38.599 18:33:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1363 -- # local cs 00:20:38.599 18:33:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:38.857 18:33:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:20:38.857 { 00:20:38.857 "base_bdev": "Nvme0n1", 00:20:38.857 "block_size": 4096, 00:20:38.857 "cluster_size": 4194304, 00:20:38.857 "free_clusters": 0, 00:20:38.857 "name": "lvs_0", 00:20:38.857 "total_data_clusters": 1278, 00:20:38.857 "uuid": "965c864b-5318-4b85-a19e-48992a87319e" 00:20:38.857 }, 00:20:38.857 { 00:20:38.857 "base_bdev": "b41cc125-f932-4933-a98e-106ee0af4a56", 00:20:38.857 "block_size": 4096, 00:20:38.857 "cluster_size": 4194304, 00:20:38.857 "free_clusters": 1276, 00:20:38.857 "name": "lvs_n_0", 00:20:38.857 "total_data_clusters": 1276, 00:20:38.857 "uuid": "d4504c0f-e784-4f0a-84ae-5135fb15075e" 00:20:38.857 } 00:20:38.857 ]' 00:20:38.857 18:33:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # jq '.[] | 
select(.uuid=="d4504c0f-e784-4f0a-84ae-5135fb15075e") .free_clusters' 00:20:38.857 18:33:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # fc=1276 00:20:38.857 18:33:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="d4504c0f-e784-4f0a-84ae-5135fb15075e") .cluster_size' 00:20:38.857 5104 00:20:38.857 18:33:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # cs=4194304 00:20:38.857 18:33:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # free_mb=5104 00:20:38.857 18:33:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # echo 5104 00:20:38.858 18:33:54 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:38.858 18:33:54 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d4504c0f-e784-4f0a-84ae-5135fb15075e lbd_nest_0 5104 00:20:39.117 18:33:54 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=d8d5e834-ea6e-4cfc-89cb-ec8b3bd58cfd 00:20:39.117 18:33:54 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:39.376 18:33:55 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:39.376 18:33:55 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 d8d5e834-ea6e-4cfc-89cb-ec8b3bd58cfd 00:20:39.634 18:33:55 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:39.893 18:33:55 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:39.893 18:33:55 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:39.893 18:33:55 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:39.893 18:33:55 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:39.893 18:33:55 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:40.151 Initializing NVMe Controllers 00:20:40.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:40.151 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:40.151 WARNING: Some requested NVMe devices were skipped 00:20:40.151 No valid NVMe controllers or AIO or URING devices found 00:20:40.151 18:33:56 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:40.151 18:33:56 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.352 Initializing NVMe Controllers 00:20:52.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.352 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:52.352 Initialization complete. Launching workers. 
00:20:52.352 ======================================================== 00:20:52.352 Latency(us) 00:20:52.352 Device Information : IOPS MiB/s Average min max 00:20:52.352 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 971.80 121.47 1028.26 343.87 10091.88 00:20:52.352 ======================================================== 00:20:52.352 Total : 971.80 121.47 1028.26 343.87 10091.88 00:20:52.352 00:20:52.352 18:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:52.352 18:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:52.352 18:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:52.352 Initializing NVMe Controllers 00:20:52.352 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.352 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:52.352 WARNING: Some requested NVMe devices were skipped 00:20:52.352 No valid NVMe controllers or AIO or URING devices found 00:20:52.352 18:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:52.352 18:34:06 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:02.322 Initializing NVMe Controllers 00:21:02.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:02.322 Initialization complete. Launching workers. 
00:21:02.322 ======================================================== 00:21:02.322 Latency(us) 00:21:02.322 Device Information : IOPS MiB/s Average min max 00:21:02.322 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 980.68 122.58 32662.50 8073.11 255888.73 00:21:02.322 ======================================================== 00:21:02.322 Total : 980.68 122.58 32662.50 8073.11 255888.73 00:21:02.322 00:21:02.322 18:34:16 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:02.322 18:34:16 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:02.322 18:34:16 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:02.322 Initializing NVMe Controllers 00:21:02.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.322 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:02.322 WARNING: Some requested NVMe devices were skipped 00:21:02.322 No valid NVMe controllers or AIO or URING devices found 00:21:02.322 18:34:17 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:02.323 18:34:17 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:12.288 Initializing NVMe Controllers 00:21:12.288 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:12.288 Controller IO queue size 128, less than required. 00:21:12.288 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:12.288 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:12.288 Initialization complete. Launching workers. 
00:21:12.288 ======================================================== 00:21:12.288 Latency(us) 00:21:12.288 Device Information : IOPS MiB/s Average min max 00:21:12.288 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3673.39 459.17 34878.14 8695.17 81615.46 00:21:12.288 ======================================================== 00:21:12.288 Total : 3673.39 459.17 34878.14 8695.17 81615.46 00:21:12.288 00:21:12.288 18:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:12.288 18:34:27 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete d8d5e834-ea6e-4cfc-89cb-ec8b3bd58cfd 00:21:12.288 18:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:12.546 18:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b41cc125-f932-4933-a98e-106ee0af4a56 00:21:12.804 18:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:13.061 rmmod nvme_tcp 00:21:13.061 rmmod nvme_fabrics 00:21:13.061 rmmod nvme_keyring 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 88471 ']' 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 88471 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 88471 ']' 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 88471 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:21:13.061 18:34:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:13.062 18:34:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88471 00:21:13.062 killing process with pid 88471 00:21:13.062 18:34:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:13.062 18:34:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:13.062 18:34:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88471' 00:21:13.062 18:34:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 88471 00:21:13.062 [2024-05-13 18:34:29.000584] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 
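Everything the nvmf_perf run did between target start and the teardown just traced was driven over JSON-RPC. A condensed sketch of that sequence, reconstructed from the rpc.py calls above (rpc.py abbreviates the full scripts/rpc.py path, and the <...> placeholders stand for the UUIDs reported by the corresponding create calls; the lvol sizes follow from free_clusters x 4 MiB cluster_size, i.e. 1278*4 = 5112 and 1276*4 = 5104):

# Export bdevs over NVMe/TCP and run perf against the listener (condensed from the trace).
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# ... spdk_nvme_perf runs with varying -q/-o against trtype:tcp traddr:10.0.0.2 trsvcid:4420 ...
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# Second round: lvstore on Nvme0n1, a 5112 MiB lvol, a nested lvstore on that lvol, and a
# 5104 MiB lvol inside it, re-exported through a fresh subsystem.
rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0
rpc.py bdev_lvol_create -u <lvs_0 uuid> lbd_0 5112
rpc.py bdev_lvol_create_lvstore <lbd_0 uuid> lvs_n_0
rpc.py bdev_lvol_create -u <lvs_n_0 uuid> lbd_nest_0 5104
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 <lbd_nest_0 uuid>
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# ... qd 1/32/128 x io 512/131072 perf loop ...
# Teardown mirrors setup: drop the subsystem, then lvols and lvstores innermost first.
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rpc.py bdev_lvol_delete <lbd_nest_0 uuid>
rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
rpc.py bdev_lvol_delete <lbd_0 uuid>
rpc.py bdev_lvol_delete_lvstore -l lvs_0

The nvmf_fio_host test that follows repeats the same create_transport / create_subsystem / add_ns / add_listener pattern via rpc_cmd, just with a single Malloc1 namespace.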
00:21:13.062 18:34:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 88471 00:21:14.964 18:34:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:14.964 18:34:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:14.964 18:34:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:14.964 18:34:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:14.964 18:34:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:14.964 18:34:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.964 18:34:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.964 18:34:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.964 18:34:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:14.964 ************************************ 00:21:14.964 END TEST nvmf_perf 00:21:14.964 ************************************ 00:21:14.964 00:21:14.964 real 0m50.987s 00:21:14.964 user 3m12.277s 00:21:14.964 sys 0m11.131s 00:21:14.964 18:34:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:14.964 18:34:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:14.964 18:34:30 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:14.964 18:34:30 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:14.964 18:34:30 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:14.964 18:34:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:14.964 ************************************ 00:21:14.964 START TEST nvmf_fio_host 00:21:14.964 ************************************ 00:21:14.964 18:34:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:14.964 * Looking for test storage... 
00:21:14.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:14.964 18:34:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:14.964 18:34:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.964 18:34:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.964 18:34:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:14.965 18:34:30 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:14.965 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:15.224 Cannot find device "nvmf_tgt_br" 00:21:15.224 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@155 -- # true 00:21:15.224 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:15.224 Cannot find device "nvmf_tgt_br2" 00:21:15.224 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@156 -- # true 00:21:15.224 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:15.224 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:15.224 Cannot find device "nvmf_tgt_br" 00:21:15.224 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@158 -- # true 00:21:15.224 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:15.224 Cannot find device "nvmf_tgt_br2" 00:21:15.224 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@159 -- # true 00:21:15.224 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:15.224 18:34:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:15.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:15.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:15.224 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:15.484 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.484 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:21:15.484 00:21:15.484 --- 10.0.0.2 ping statistics --- 00:21:15.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.484 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:15.484 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:15.484 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:21:15.484 00:21:15.484 --- 10.0.0.3 ping statistics --- 00:21:15.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.484 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:15.484 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:15.484 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:21:15.484 00:21:15.484 --- 10.0.0.1 ping statistics --- 00:21:15.484 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.484 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@433 -- # return 0 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=89424 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:15.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 89424 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 89424 ']' 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:15.484 18:34:31 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.484 [2024-05-13 18:34:31.300101] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:21:15.484 [2024-05-13 18:34:31.300402] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.743 [2024-05-13 18:34:31.443677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:15.743 [2024-05-13 18:34:31.562012] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.743 [2024-05-13 18:34:31.562334] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
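Before any of those RPCs can be issued, the fio-host test launches its own target inside the namespace and blocks until the RPC socket answers, which is what the nvmfappstart/waitforlisten trace above is doing. A minimal stand-in for that pattern, assuming the default /var/tmp/spdk.sock socket shown in the waiting message (paths abbreviated; the real waitforlisten helper also enforces a retry limit):

# Start nvmf_tgt in the test namespace (flags as traced: shm id 0, all trace groups, 4 cores).
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the RPC socket until the app responds; rpc_get_methods is a cheap query that any
# running SPDK app answers.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died during startup
    sleep 0.1
done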
00:21:15.743 [2024-05-13 18:34:31.562362] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.743 [2024-05-13 18:34:31.562376] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.743 [2024-05-13 18:34:31.562387] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.743 [2024-05-13 18:34:31.562534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.743 [2024-05-13 18:34:31.563068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.743 [2024-05-13 18:34:31.563238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:15.743 [2024-05-13 18:34:31.563245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.678 [2024-05-13 18:34:32.286628] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.678 Malloc1 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.678 [2024-05-13 18:34:32.393394] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature 
[listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:16.678 [2024-05-13 18:34:32.393685] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:16.678 18:34:32 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:16.678 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:16.678 fio-3.35 00:21:16.678 Starting 1 thread 00:21:19.209 00:21:19.209 test: (groupid=0, jobs=1): err= 0: pid=89503: Mon May 13 18:34:34 2024 00:21:19.209 read: IOPS=9346, BW=36.5MiB/s (38.3MB/s)(73.2MiB/2006msec) 00:21:19.209 slat (usec): min=2, max=361, avg= 2.73, stdev= 3.42 00:21:19.209 clat (usec): min=3047, max=12530, avg=7152.17, stdev=516.79 00:21:19.209 lat (usec): min=3084, max=12533, avg=7154.90, stdev=516.49 00:21:19.209 clat percentiles (usec): 00:21:19.209 | 1.00th=[ 5932], 5.00th=[ 6456], 10.00th=[ 6587], 20.00th=[ 6783], 00:21:19.209 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7111], 60.00th=[ 7242], 00:21:19.209 | 70.00th=[ 7373], 80.00th=[ 7504], 90.00th=[ 7701], 95.00th=[ 7898], 00:21:19.209 | 99.00th=[ 8455], 99.50th=[ 8848], 99.90th=[11469], 99.95th=[11994], 00:21:19.209 | 99.99th=[12518] 00:21:19.209 bw ( KiB/s): min=36632, max=37952, per=99.94%, avg=37364.00, stdev=546.45, samples=4 00:21:19.209 iops : min= 9158, max= 9488, avg=9341.00, stdev=136.61, samples=4 00:21:19.209 write: IOPS=9351, BW=36.5MiB/s (38.3MB/s)(73.3MiB/2006msec); 0 zone resets 00:21:19.209 slat (usec): min=2, max=298, avg= 2.83, stdev= 2.46 00:21:19.209 clat (usec): min=2540, max=12076, avg=6482.12, stdev=458.95 00:21:19.209 lat (usec): min=2554, max=12078, avg=6484.95, stdev=458.77 00:21:19.209 clat percentiles (usec): 00:21:19.209 | 1.00th=[ 5407], 5.00th=[ 5866], 10.00th=[ 5997], 20.00th=[ 6194], 00:21:19.209 | 30.00th=[ 6325], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6587], 00:21:19.209 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7111], 00:21:19.209 | 99.00th=[ 7439], 99.50th=[ 7898], 99.90th=[10421], 99.95th=[10945], 00:21:19.209 | 99.99th=[11994] 00:21:19.209 bw ( KiB/s): min=37128, max=37568, per=99.98%, avg=37398.00, stdev=194.14, samples=4 00:21:19.209 iops : min= 9282, max= 9392, avg=9349.50, stdev=48.54, samples=4 00:21:19.209 lat (msec) : 4=0.17%, 10=99.66%, 20=0.17% 00:21:19.209 cpu : usr=63.44%, sys=25.84%, ctx=10, majf=0, minf=5 00:21:19.209 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:19.209 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:19.209 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:19.209 issued rwts: total=18749,18759,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:19.209 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:19.209 00:21:19.209 Run status group 0 (all jobs): 00:21:19.209 READ: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=73.2MiB (76.8MB), run=2006-2006msec 00:21:19.209 WRITE: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=73.3MiB (76.8MB), run=2006-2006msec 00:21:19.209 18:34:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:19.210 18:34:34 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:19.210 18:34:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:19.210 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:19.210 fio-3.35 00:21:19.210 Starting 1 thread 00:21:21.741 00:21:21.741 test: (groupid=0, jobs=1): err= 0: pid=89546: Mon May 13 18:34:37 2024 00:21:21.741 read: IOPS=8316, BW=130MiB/s (136MB/s)(261MiB/2005msec) 00:21:21.741 slat (usec): min=3, max=117, avg= 3.89, stdev= 1.64 00:21:21.741 clat (usec): min=2112, max=17320, avg=9178.87, stdev=2149.95 00:21:21.741 lat (usec): min=2116, max=17324, avg=9182.76, stdev=2149.96 00:21:21.741 clat percentiles (usec): 00:21:21.741 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7242], 00:21:21.741 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9765], 00:21:21.741 | 70.00th=[10421], 80.00th=[11207], 90.00th=[11731], 95.00th=[12649], 00:21:21.741 | 99.00th=[14615], 99.50th=[15008], 99.90th=[15795], 99.95th=[16057], 00:21:21.741 | 99.99th=[16909] 00:21:21.741 bw ( KiB/s): min=60480, max=74880, per=51.13%, avg=68040.00, stdev=7680.46, samples=4 00:21:21.741 iops : min= 3780, max= 4680, avg=4252.50, stdev=480.03, samples=4 00:21:21.741 write: IOPS=4939, BW=77.2MiB/s (80.9MB/s)(139MiB/1797msec); 0 zone 
resets 00:21:21.741 slat (usec): min=36, max=286, avg=38.02, stdev= 5.39 00:21:21.741 clat (usec): min=3740, max=18957, avg=11072.13, stdev=1853.96 00:21:21.741 lat (usec): min=3777, max=18996, avg=11110.14, stdev=1853.72 00:21:21.741 clat percentiles (usec): 00:21:21.741 | 1.00th=[ 7177], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9503], 00:21:21.741 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11338], 00:21:21.741 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13829], 95.00th=[14484], 00:21:21.741 | 99.00th=[15533], 99.50th=[15926], 99.90th=[16450], 99.95th=[17433], 00:21:21.741 | 99.99th=[19006] 00:21:21.741 bw ( KiB/s): min=60896, max=78784, per=89.51%, avg=70736.00, stdev=9016.60, samples=4 00:21:21.741 iops : min= 3806, max= 4924, avg=4421.00, stdev=563.54, samples=4 00:21:21.741 lat (msec) : 4=0.20%, 10=51.57%, 20=48.23% 00:21:21.741 cpu : usr=74.11%, sys=15.91%, ctx=11, majf=0, minf=16 00:21:21.741 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:21.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:21.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:21.741 issued rwts: total=16675,8876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:21.741 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:21.741 00:21:21.741 Run status group 0 (all jobs): 00:21:21.741 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=261MiB (273MB), run=2005-2005msec 00:21:21.741 WRITE: bw=77.2MiB/s (80.9MB/s), 77.2MiB/s-77.2MiB/s (80.9MB/s-80.9MB/s), io=139MiB (145MB), run=1797-1797msec 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # get_nvme_bdfs 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # bdfs=() 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1509 -- # local bdfs 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.741 Nvme0n1 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # ls_guid=ace7ec29-f6fe-4822-8271-ff7a863eee94 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # get_lvs_free_mb ace7ec29-f6fe-4822-8271-ff7a863eee94 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=ace7ec29-f6fe-4822-8271-ff7a863eee94 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:21:21.741 { 00:21:21.741 "base_bdev": "Nvme0n1", 00:21:21.741 "block_size": 4096, 00:21:21.741 "cluster_size": 1073741824, 00:21:21.741 "free_clusters": 4, 00:21:21.741 "name": "lvs_0", 00:21:21.741 "total_data_clusters": 4, 00:21:21.741 "uuid": "ace7ec29-f6fe-4822-8271-ff7a863eee94" 00:21:21.741 } 00:21:21.741 ]' 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="ace7ec29-f6fe-4822-8271-ff7a863eee94") .free_clusters' 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=4 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="ace7ec29-f6fe-4822-8271-ff7a863eee94") .cluster_size' 00:21:21.741 4096 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=1073741824 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4096 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4096 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:21.741 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.742 5bbfd712-2a12-490c-ab8a-b2d78ba55374 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:21.742 18:34:37 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:21.999 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:21.999 fio-3.35 00:21:21.999 Starting 1 thread 00:21:24.528 00:21:24.528 test: (groupid=0, jobs=1): err= 0: pid=89625: Mon May 13 18:34:40 2024 00:21:24.528 read: IOPS=6721, BW=26.3MiB/s (27.5MB/s)(52.7MiB/2008msec) 00:21:24.528 slat (usec): min=2, max=274, avg= 2.58, stdev= 2.98 00:21:24.528 clat (usec): min=3747, max=18430, avg=9987.49, stdev=832.74 00:21:24.528 lat (usec): min=3754, max=18432, avg=9990.07, stdev=832.62 00:21:24.528 clat percentiles (usec): 00:21:24.528 | 1.00th=[ 8225], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:21:24.528 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:21:24.528 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11338], 00:21:24.528 | 99.00th=[11994], 99.50th=[12256], 99.90th=[14746], 99.95th=[16057], 00:21:24.528 | 99.99th=[18482] 00:21:24.528 bw ( KiB/s): min=25940, max=27368, per=99.88%, avg=26853.00, stdev=672.80, samples=4 00:21:24.528 iops : min= 6485, max= 6842, avg=6713.25, stdev=168.20, samples=4 00:21:24.528 write: IOPS=6726, BW=26.3MiB/s (27.6MB/s)(52.8MiB/2008msec); 0 zone resets 00:21:24.528 slat (usec): min=2, max=193, avg= 2.68, stdev= 1.88 00:21:24.528 clat (usec): min=1754, max=16132, avg=8966.06, stdev=769.02 00:21:24.528 lat (usec): min=1762, max=16134, avg=8968.74, stdev=768.93 00:21:24.528 clat percentiles (usec): 00:21:24.528 | 1.00th=[ 7308], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8356], 00:21:24.528 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:21:24.528 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10159], 00:21:24.528 | 99.00th=[10683], 99.50th=[10814], 99.90th=[13304], 99.95th=[15795], 00:21:24.528 | 99.99th=[16057] 00:21:24.528 bw ( KiB/s): min=26752, max=26978, per=99.88%, avg=26872.50, stdev=98.26, samples=4 00:21:24.528 iops : min= 6688, max= 6744, avg=6718.00, stdev=24.39, samples=4 00:21:24.528 lat (msec) : 2=0.01%, 4=0.07%, 10=72.43%, 20=27.50% 00:21:24.528 cpu : usr=69.76%, sys=23.07%, ctx=11, majf=0, minf=23 00:21:24.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:24.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:24.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:24.528 issued rwts: total=13497,13506,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:24.528 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:24.528 00:21:24.528 Run status group 0 (all jobs): 00:21:24.528 READ: bw=26.3MiB/s (27.5MB/s), 26.3MiB/s-26.3MiB/s (27.5MB/s-27.5MB/s), io=52.7MiB (55.3MB), run=2008-2008msec 00:21:24.528 WRITE: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.8MiB (55.3MB), run=2008-2008msec 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:24.528 
18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@62 -- # ls_nested_guid=2c3dd933-a025-4c63-90be-637c57c8057c 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@63 -- # get_lvs_free_mb 2c3dd933-a025-4c63-90be-637c57c8057c 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # local lvs_uuid=2c3dd933-a025-4c63-90be-637c57c8057c 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1361 -- # local lvs_info 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local fc 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local cs 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # rpc_cmd bdev_lvol_get_lvstores 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.528 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # lvs_info='[ 00:21:24.528 { 00:21:24.528 "base_bdev": "Nvme0n1", 00:21:24.528 "block_size": 4096, 00:21:24.528 "cluster_size": 1073741824, 00:21:24.528 "free_clusters": 0, 00:21:24.528 "name": "lvs_0", 00:21:24.528 "total_data_clusters": 4, 00:21:24.528 "uuid": "ace7ec29-f6fe-4822-8271-ff7a863eee94" 00:21:24.528 }, 00:21:24.528 { 00:21:24.529 "base_bdev": "5bbfd712-2a12-490c-ab8a-b2d78ba55374", 00:21:24.529 "block_size": 4096, 00:21:24.529 "cluster_size": 4194304, 00:21:24.529 "free_clusters": 1022, 00:21:24.529 "name": "lvs_n_0", 00:21:24.529 "total_data_clusters": 1022, 00:21:24.529 "uuid": "2c3dd933-a025-4c63-90be-637c57c8057c" 00:21:24.529 } 00:21:24.529 ]' 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # jq '.[] | select(.uuid=="2c3dd933-a025-4c63-90be-637c57c8057c") .free_clusters' 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # fc=1022 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # jq '.[] | select(.uuid=="2c3dd933-a025-4c63-90be-637c57c8057c") .cluster_size' 00:21:24.529 4088 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # cs=4194304 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # free_mb=4088 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # echo 4088 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.529 1b2a2312-b78b-441f-b677-2ced6503dafd 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:24.529 18:34:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:24.529 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:24.529 fio-3.35 00:21:24.529 Starting 1 thread 00:21:27.116 00:21:27.116 test: (groupid=0, jobs=1): err= 0: pid=89678: Mon May 13 18:34:42 2024 00:21:27.116 read: IOPS=5988, BW=23.4MiB/s (24.5MB/s)(47.0MiB/2008msec) 00:21:27.116 slat (usec): min=2, max=315, avg= 2.59, stdev= 3.60 00:21:27.116 clat (usec): min=4202, max=19213, avg=11236.87, stdev=961.38 00:21:27.116 lat (usec): min=4211, max=19216, avg=11239.46, stdev=961.09 00:21:27.116 clat percentiles (usec): 00:21:27.116 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:21:27.116 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11469], 00:21:27.116 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:21:27.116 | 99.00th=[13435], 99.50th=[13960], 99.90th=[17695], 99.95th=[19006], 00:21:27.116 | 99.99th=[19268] 00:21:27.116 bw ( KiB/s): min=23200, max=24296, per=99.72%, avg=23886.00, stdev=483.69, samples=4 00:21:27.116 iops : min= 5800, max= 6074, avg=5971.50, stdev=120.92, samples=4 00:21:27.116 write: IOPS=5967, BW=23.3MiB/s (24.4MB/s)(46.8MiB/2008msec); 0 zone resets 00:21:27.116 slat (usec): min=2, max=221, avg= 2.72, stdev= 2.34 00:21:27.116 clat (usec): min=2242, max=17580, avg=10077.68, stdev=859.87 00:21:27.116 lat (usec): min=2255, max=17583, avg=10080.40, stdev=859.69 00:21:27.116 clat percentiles (usec): 00:21:27.116 | 1.00th=[ 8225], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:21:27.116 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:21:27.116 | 70.00th=[10552], 80.00th=[10683], 90.00th=[11076], 95.00th=[11338], 00:21:27.116 | 99.00th=[11994], 99.50th=[12125], 99.90th=[15664], 99.95th=[16188], 00:21:27.116 | 99.99th=[17433] 00:21:27.116 bw ( KiB/s): min=23648, max=24136, per=100.00%, avg=23874.00, stdev=212.80, samples=4 00:21:27.116 iops : min= 5912, max= 6034, avg=5968.50, stdev=53.20, samples=4 00:21:27.116 lat (msec) : 4=0.04%, 10=26.38%, 20=73.58% 00:21:27.116 cpu : usr=71.60%, sys=22.12%, ctx=20, majf=0, minf=23 00:21:27.116 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:27.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:27.116 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:27.116 issued rwts: total=12024,11983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:27.116 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:27.116 00:21:27.116 Run status group 0 (all jobs): 00:21:27.116 READ: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=47.0MiB (49.2MB), run=2008-2008msec 00:21:27.116 WRITE: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.8MiB (49.1MB), run=2008-2008msec 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # sync 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.116 18:34:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:27.684 18:34:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.684 18:34:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:21:27.684 18:34:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:21:27.684 18:34:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:21:27.684 18:34:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:27.684 18:34:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:27.950 18:34:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:27.950 18:34:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:27.950 18:34:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:27.951 rmmod nvme_tcp 00:21:27.951 rmmod nvme_fabrics 00:21:27.951 rmmod nvme_keyring 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 89424 ']' 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 89424 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 89424 ']' 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host 
-- common/autotest_common.sh@950 -- # kill -0 89424 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89424 00:21:27.951 killing process with pid 89424 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89424' 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 89424 00:21:27.951 [2024-05-13 18:34:43.692428] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:27.951 18:34:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 89424 00:21:28.214 18:34:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:28.214 18:34:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:28.214 18:34:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:28.214 18:34:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:28.214 18:34:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:28.214 18:34:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.214 18:34:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.214 18:34:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.214 18:34:44 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:28.214 ************************************ 00:21:28.214 END TEST nvmf_fio_host 00:21:28.214 ************************************ 00:21:28.214 00:21:28.214 real 0m13.270s 00:21:28.214 user 0m54.998s 00:21:28.214 sys 0m3.481s 00:21:28.214 18:34:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:28.214 18:34:44 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.214 18:34:44 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:28.214 18:34:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:21:28.214 18:34:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:28.214 18:34:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:28.214 ************************************ 00:21:28.214 START TEST nvmf_failover 00:21:28.214 ************************************ 00:21:28.214 18:34:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:28.474 * Looking for test storage... 
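The nvmf_fio_host run that just finished reduces to a short RPC sequence on the target followed by a stock fio invocation with the SPDK NVMe plugin preloaded. A minimal sketch of that flow, with paths shortened to be relative to the SPDK repo root; the rpc_cmd calls in the log are the test suite's wrapper around scripts/rpc.py, and example_config.fio (which sets ioengine=spdk, per the fio banner above) is the job file referenced by the log, not reproduced here:

  # Target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks,
  # and a subsystem that exports it on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Host side: plain fio with the SPDK engine preloaded; the --filename string carries the
  # transport/address/namespace instead of a device path
  LD_PRELOAD=build/fio/spdk_nvme /usr/src/fio/fio app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The later lvol runs in the same test follow the identical pattern, only with lvs_0/lbd_0 and lvs_n_0/lbd_nest_0 attached to cnode2 and cnode3 in place of Malloc1.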
00:21:28.474 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:28.474 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:28.475 
18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@432 -- # nvmf_veth_init 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:28.475 Cannot find device "nvmf_tgt_br" 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@155 -- # true 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:28.475 Cannot find device "nvmf_tgt_br2" 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@156 -- # true 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:28.475 Cannot find device "nvmf_tgt_br" 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@158 -- # true 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:28.475 Cannot find device "nvmf_tgt_br2" 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@159 -- # true 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 
00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:28.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@162 -- # true 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:28.475 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@163 -- # true 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:28.475 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:28.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:28.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:21:28.734 00:21:28.734 --- 10.0.0.2 ping statistics --- 00:21:28.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.734 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:28.734 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:28.734 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:21:28.734 00:21:28.734 --- 10.0.0.3 ping statistics --- 00:21:28.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.734 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:28.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:28.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:21:28.734 00:21:28.734 --- 10.0.0.1 ping statistics --- 00:21:28.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.734 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@433 -- # return 0 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=89895 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 89895 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 89895 ']' 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:28.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
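At this point nvmf_veth_init has finished building the test network (the three pings confirm 10.0.0.2 and 10.0.0.3 are reachable from the host and 10.0.0.1 from inside the namespace) and nvmfappstart has launched nvmf_tgt inside that namespace. A condensed sketch of the topology assembled above, assuming the same interface names and 10.0.0.0/24 addressing seen in the log; the second target pair (nvmf_tgt_if2/nvmf_tgt_br2 carrying 10.0.0.3) is configured the same way and omitted here:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up

  # bridge the two halves together and open TCP/4420 toward the initiator interface
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # the target then runs entirely inside the namespace (path relative to the SPDK repo root)
  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE

The three listeners that failover.sh adds next (ports 4420, 4421 and 4422 on 10.0.0.2) all ride on this namespaced interface.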
00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:28.734 18:34:44 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:28.734 [2024-05-13 18:34:44.612750] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:21:28.734 [2024-05-13 18:34:44.612855] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.993 [2024-05-13 18:34:44.753692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:28.993 [2024-05-13 18:34:44.888488] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.993 [2024-05-13 18:34:44.888785] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.993 [2024-05-13 18:34:44.889135] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.993 [2024-05-13 18:34:44.889648] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.993 [2024-05-13 18:34:44.889882] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.993 [2024-05-13 18:34:44.890187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:28.993 [2024-05-13 18:34:44.890302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:28.993 [2024-05-13 18:34:44.890320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.926 18:34:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:29.926 18:34:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:21:29.926 18:34:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:29.926 18:34:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:29.926 18:34:45 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:29.926 18:34:45 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.926 18:34:45 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:30.184 [2024-05-13 18:34:45.884008] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.184 18:34:45 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:30.443 Malloc0 00:21:30.443 18:34:46 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:30.702 18:34:46 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:30.960 18:34:46 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.218 [2024-05-13 18:34:47.047967] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:31.218 [2024-05-13 
18:34:47.048228] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.218 18:34:47 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:31.475 [2024-05-13 18:34:47.284392] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:31.475 18:34:47 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:31.733 [2024-05-13 18:34:47.520586] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:31.734 18:34:47 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=90011 00:21:31.734 18:34:47 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:31.734 18:34:47 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:31.734 18:34:47 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 90011 /var/tmp/bdevperf.sock 00:21:31.734 18:34:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 90011 ']' 00:21:31.734 18:34:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.734 18:34:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:31.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.734 18:34:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
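With the namespace in place, the trace shows nvmf_tgt being started inside it and a test subsystem being provisioned over JSON-RPC, after which bdevperf is launched idle on its own RPC socket. A condensed sketch of that provisioning step, with the same paths, NQN, and options that appear in the log:

    # Target application runs inside the namespace (core mask 0xE, tracepoint group mask 0xFFFF, as logged).
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, same options as the trace
    $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB backing bdev with 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

    # bdevperf starts idle (-z) on its own RPC socket; the verify workload is triggered later
    # through bdevperf.py perform_tests.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &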
00:21:31.734 18:34:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:31.734 18:34:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:32.671 18:34:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:32.671 18:34:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:21:32.671 18:34:48 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:32.929 NVMe0n1 00:21:32.929 18:34:48 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:33.188 00:21:33.447 18:34:49 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=90060 00:21:33.447 18:34:49 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:33.447 18:34:49 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:34.382 18:34:50 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:34.641 [2024-05-13 18:34:50.426631] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426684] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426696] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426706] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426715] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426723] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426732] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426740] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426749] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426757] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426766] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426774] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426783] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same 
with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426791] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426799] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426808] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426816] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426824] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426832] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426840] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426850] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426858] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426866] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426875] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426883] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426900] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.426909] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.427362] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.427390] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.427399] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.427408] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.427416] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.641 [2024-05-13 18:34:50.427425] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 [2024-05-13 18:34:50.427434] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 [2024-05-13 18:34:50.427442] 
tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 [2024-05-13 18:34:50.427450] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 [2024-05-13 18:34:50.427459] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 [2024-05-13 18:34:50.427467] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 [2024-05-13 18:34:50.427475] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 [2024-05-13 18:34:50.427483] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 [2024-05-13 18:34:50.427594] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 [2024-05-13 18:34:50.427606] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 [2024-05-13 18:34:50.427614] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 [2024-05-13 18:34:50.427697] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 [2024-05-13 18:34:50.427710] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 [2024-05-13 18:34:50.427718] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 [2024-05-13 18:34:50.427727] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224de30 is same with the state(5) to be set 00:21:34.642 18:34:50 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:37.934 18:34:53 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:37.934 00:21:37.934 18:34:53 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:38.191 [2024-05-13 18:34:54.099820] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.191 [2024-05-13 18:34:54.099878] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.191 [2024-05-13 18:34:54.099895] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.191 [2024-05-13 18:34:54.099904] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.191 [2024-05-13 18:34:54.099912] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.191 [2024-05-13 18:34:54.099922] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.191 [2024-05-13 18:34:54.099931] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.099940] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.099948] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.099956] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.099965] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.099973] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.099981] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.099990] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.099998] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100006] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100015] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100023] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100031] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100039] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100048] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100059] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100071] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100079] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100088] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100097] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100105] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100114] 
tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100122] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100130] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100138] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100147] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100155] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100163] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100173] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100181] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100190] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100199] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100207] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100215] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100224] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100233] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100241] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100249] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100257] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100265] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100274] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100282] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100290] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the 
state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100298] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100306] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100314] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100322] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100329] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100337] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100345] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100353] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100361] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100369] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100377] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100385] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100393] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100401] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100409] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100416] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100425] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100433] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 [2024-05-13 18:34:54.100441] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x224e660 is same with the state(5) to be set 00:21:38.192 18:34:54 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:41.483 18:34:57 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.483 [2024-05-13 18:34:57.389617] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
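On the host side, the same bdevperf instance is pointed at the subsystem through its own RPC socket: the controller is attached as NVMe0 via the 4420 portal, a second attach with the same controller name adds the 4421 portal as a path to fail over to, and bdevperf.py then kicks off the queued verify workload. A condensed sketch of those steps as they appear in the host/failover.sh lines of the trace:

    brpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"

    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # creates NVMe0n1
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # same name: extra portal for failover

    # Start the 15-second verify run inside the already-running bdevperf and remember its pid,
    # as host/failover.sh does with run_test_pid (mechanism sketched, not the verbatim script).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
    run_test_pid=$!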
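The failover itself is driven purely by listener changes on the target: the 4420 listener is removed shortly after I/O starts, a third portal on 4422 is attached and 4421 is removed, 4420 is re-added, and, further down in the trace, 4422 is removed as well, so the workload has to keep moving between paths. The bursts of tcp.c:1595 "The recv state of tqpair=0x... is same with the state(5) to be set" messages clustered around each change appear to be the target's queue pairs cycling through their receive states while connections are torn down and re-established. A sketch of the listener cycle, with the same sleeps and addresses as the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420   # first failover: 4420 goes away
    sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn   # third portal on the host side
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421   # second failover: 4421 goes away
    sleep 3
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420      # 4420 comes back
    sleep 1
    $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422   # final failover back to 4420
    wait $run_test_pid                                                    # returns 0 when the verify run survives all of this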
00:21:41.483 18:34:57 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:42.872 18:34:58 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:42.872 [2024-05-13 18:34:58.707167] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707214] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707225] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707234] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707243] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707251] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707261] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707270] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707278] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707287] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707295] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707303] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707312] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707320] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707328] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707336] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707345] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707353] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707361] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707369] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) 
to be set 00:21:42.872 [2024-05-13 18:34:58.707378] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707385] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 [2024-05-13 18:34:58.707394] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ff6160 is same with the state(5) to be set 00:21:42.872 18:34:58 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 90060 00:21:49.439 0 00:21:49.439 18:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 90011 00:21:49.439 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 90011 ']' 00:21:49.439 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 90011 00:21:49.439 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:21:49.439 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:49.439 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90011 00:21:49.439 killing process with pid 90011 00:21:49.439 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:49.439 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:49.439 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90011' 00:21:49.439 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 90011 00:21:49.439 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 90011 00:21:49.439 18:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:49.439 [2024-05-13 18:34:47.593621] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:21:49.439 [2024-05-13 18:34:47.593732] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90011 ] 00:21:49.439 [2024-05-13 18:34:47.728697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.439 [2024-05-13 18:34:47.845428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.439 Running I/O for 15 seconds... 
00:21:49.439 [2024-05-13 18:34:50.428300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:87192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:87200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:87208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:87216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:87224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:87232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:87248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:87256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428652] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:87264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:87272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:87280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:87304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:87312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:87320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:87328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.439 [2024-05-13 18:34:50.428930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.439 [2024-05-13 18:34:50.428945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:87336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.428958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.428972] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.428985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:87352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:87384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:87392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:87400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:87416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:101 nsid:1 lba:87424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:87432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:87440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:87464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:87472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:87488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:87504 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:87520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:87528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.440 [2024-05-13 18:34:50.429792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.440 [2024-05-13 18:34:50.429825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:87760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.440 [2024-05-13 18:34:50.429864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:87768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:49.440 [2024-05-13 18:34:50.429893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:87776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.440 [2024-05-13 18:34:50.429921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:87784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.440 [2024-05-13 18:34:50.429950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:87792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.440 [2024-05-13 18:34:50.429977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.429992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:87800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.440 [2024-05-13 18:34:50.430005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.430020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:87808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.440 [2024-05-13 18:34:50.430034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.430048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:87816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.440 [2024-05-13 18:34:50.430061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.430083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:87824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.440 [2024-05-13 18:34:50.430097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.440 [2024-05-13 18:34:50.430111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:87832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.440 [2024-05-13 18:34:50.430125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:87840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:87848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430181] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:87856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:87864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:87872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:87880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:87888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:87896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:87904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:87912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:87920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:87928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430472] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:87944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:87952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:87968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:87976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:87984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:87992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:88000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:88008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:88016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:88024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:88032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:88040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:88056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.430977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.430990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.431005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:88072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.441 [2024-05-13 18:34:50.431018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.431038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.441 [2024-05-13 18:34:50.431052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.431066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.441 [2024-05-13 18:34:50.431080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 
[2024-05-13 18:34:50.431094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.441 [2024-05-13 18:34:50.431108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.431122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.441 [2024-05-13 18:34:50.431135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.431151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.441 [2024-05-13 18:34:50.431164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.431179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:87608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.441 [2024-05-13 18:34:50.431192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.431207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:87616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.441 [2024-05-13 18:34:50.431220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.431235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:87624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.441 [2024-05-13 18:34:50.431253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.431269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.441 [2024-05-13 18:34:50.431282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.431297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.441 [2024-05-13 18:34:50.431310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.441 [2024-05-13 18:34:50.431325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:87648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.442 [2024-05-13 18:34:50.431346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.442 [2024-05-13 18:34:50.431375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431389] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.442 [2024-05-13 18:34:50.431402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.442 [2024-05-13 18:34:50.431431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:87680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.442 [2024-05-13 18:34:50.431459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:87688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.442 [2024-05-13 18:34:50.431487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:88096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:88104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:88112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:60 nsid:1 lba:88128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:88160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:50.431969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.431989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:87696 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:21:49.442 [2024-05-13 18:34:50.432002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.432017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:87704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.442 [2024-05-13 18:34:50.432031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.432046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:87712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.442 [2024-05-13 18:34:50.432059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.432074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:87720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.442 [2024-05-13 18:34:50.432094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.432110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:87728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.442 [2024-05-13 18:34:50.432123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.432138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.442 [2024-05-13 18:34:50.432152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.432166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e2370 is same with the state(5) to be set 00:21:49.442 [2024-05-13 18:34:50.432183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:49.442 [2024-05-13 18:34:50.432194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:49.442 [2024-05-13 18:34:50.432210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:87744 len:8 PRP1 0x0 PRP2 0x0 00:21:49.442 [2024-05-13 18:34:50.432224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.432282] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15e2370 was disconnected and freed. reset controller. 
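The burst of "ABORTED - SQ DELETION (00/08)" completions above is what the SPDK NVMe driver reports for I/O that was still queued on a submission queue when that queue was torn down for the controller reset. As a hedged illustration only (not part of this test run), the minimal sketch below shows how an application's I/O completion callback could recognize that status using the public definitions from spdk/nvme.h and spdk/nvme_spec.h; the callback name and the requeue decision are assumptions for the example.

```c
/*
 * Hypothetical sketch: recognizing the "ABORTED - SQ DELETION" completions
 * seen in the log above from an SPDK NVMe I/O completion callback.
 * io_complete_cb and the requeue policy are illustrative, not from this test.
 */
#include <stdio.h>
#include "spdk/nvme.h"

static void
io_complete_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	/* Status code type 0x0 (generic) with status code 0x08 is
	 * "ABORTED - SQ DELETION": the command was never executed because
	 * its submission queue was deleted during the reset. */
	if (spdk_nvme_cpl_is_error(cpl) &&
	    cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Safe to resubmit once the controller has been reset or a
		 * failover path has been connected. */
		printf("I/O aborted by SQ deletion; requeueing\n");
		return;
	}

	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("I/O failed: sct=0x%x sc=0x%x\n",
		       cpl->status.sct, cpl->status.sc);
	}
}
```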
00:21:49.442 [2024-05-13 18:34:50.432302] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:49.442 [2024-05-13 18:34:50.432356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.442 [2024-05-13 18:34:50.432377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.432392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.442 [2024-05-13 18:34:50.432405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.432419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.442 [2024-05-13 18:34:50.432432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.432445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.442 [2024-05-13 18:34:50.432459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:50.432471] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.442 [2024-05-13 18:34:50.436345] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.442 [2024-05-13 18:34:50.436386] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1575080 (9): Bad file descriptor 00:21:49.442 [2024-05-13 18:34:50.473582] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
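The records above show the failover path exercised by this test: bdev_nvme_failover_trid switches the controller from 10.0.0.2:4420 to 10.0.0.2:4421, the old qpair is freed, and the controller reset then completes successfully. The following sketch restates that sequence with the public NVMe driver API from spdk/nvme.h, not the internal bdev_nvme code that actually produced these messages; the function name, variable names, and the single reconnected qpair are assumptions, while the address, service ID, and subsystem NQN are taken from the log.

```c
/*
 * Hypothetical sketch of the failover sequence reported above:
 * repoint the controller at the alternate listener, reset it (which
 * aborts queued I/O with SQ DELETION status), then reconnect I/O qpairs.
 */
#include <stdio.h>
#include "spdk/nvme.h"

static int
failover_to_alternate_path(struct spdk_nvme_ctrlr *ctrlr,
			   struct spdk_nvme_qpair *io_qpair)
{
	struct spdk_nvme_transport_id trid = {};
	int rc;

	/* Alternate TCP listener from the log: 10.0.0.2:4421, cnode1. */
	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	snprintf(trid.traddr, sizeof(trid.traddr), "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "4421");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "nqn.2016-06.io.spdk:cnode1");

	rc = spdk_nvme_ctrlr_set_trid(ctrlr, &trid);
	if (rc != 0) {
		return rc;
	}

	/* The reset tears down the old admin/I/O queues (producing the
	 * aborted completions above) and connects to the new address. */
	rc = spdk_nvme_ctrlr_reset(ctrlr);
	if (rc != 0) {
		return rc;
	}

	/* Existing I/O qpairs must be reconnected after the reset. */
	return spdk_nvme_ctrlr_reconnect_io_qpair(io_qpair);
}
```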
00:21:49.442 [2024-05-13 18:34:54.102632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:54.102687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:54.102714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:102192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.442 [2024-05-13 18:34:54.102760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.442 [2024-05-13 18:34:54.102778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.442 [2024-05-13 18:34:54.102792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.102807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.102821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.102836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.102849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.102864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:102216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.102877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.102895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.102915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.102931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.102945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.102959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:102240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.102972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.102987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103015] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:102264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:102272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:102280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:102288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:102304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:102312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:102320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:102328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:102352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:102360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:102368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:102376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:102384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:102400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:44 nsid:1 lba:102416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.443 [2024-05-13 18:34:54.103620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.443 [2024-05-13 18:34:54.103635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.444 [2024-05-13 18:34:54.103647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.444 [2024-05-13 18:34:54.103662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:102432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.444 [2024-05-13 18:34:54.103675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.444 [2024-05-13 18:34:54.103690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:102440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.444 [2024-05-13 18:34:54.103703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.444 [2024-05-13 18:34:54.103718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:102448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.444 [2024-05-13 18:34:54.103731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.444 [2024-05-13 18:34:54.103746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.444 [2024-05-13 18:34:54.103759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.444 [2024-05-13 18:34:54.103773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:102464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.444 [2024-05-13 18:34:54.103786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.444 [2024-05-13 18:34:54.103809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:102472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.444 [2024-05-13 18:34:54.103824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.444 [2024-05-13 18:34:54.103838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.444 [2024-05-13 18:34:54.103851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.444 [2024-05-13 18:34:54.103866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:102488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.444 [2024-05-13 18:34:54.103888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.444 [2024-05-13 18:34:54.103903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102496 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.444 [2024-05-13 18:34:54.103917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.444 [2024-05-13 18:34:54.103932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.444 [2024-05-13 18:34:54.103945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.444 [2024-05-13 18:34:54.103960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:102512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.444 [2024-05-13 18:34:54.103973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.444 [2024-05-13 18:34:54.103995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:102520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:102528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:102536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:102544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:102560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:49.445 [2024-05-13 18:34:54.104211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.445 [2024-05-13 18:34:54.104276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.445 [2024-05-13 18:34:54.104310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.445 [2024-05-13 18:34:54.104338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.445 [2024-05-13 18:34:54.104366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.445 [2024-05-13 18:34:54.104394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.445 [2024-05-13 18:34:54.104422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.445 [2024-05-13 18:34:54.104450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:102592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:102600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 
18:34:54.104505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:102608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:102616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:102624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:102632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:102640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:102648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:102656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:102672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:102688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:102704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:102712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:102720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.104980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.104995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:102728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.105008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.105030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:102736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.105045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.105060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.105074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.105089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.105102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.445 [2024-05-13 18:34:54.105117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:102760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:49.445 [2024-05-13 18:34:54.105130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 00:21:49.445-00:21:49.447 (2024-05-13 18:34:54.105145 - 18:34:54.107235): repeated nvme_qpair.c *NOTICE*/*ERROR* records elided: in-flight and queued WRITE (lba 102768-103072) and READ (lba 102120-102176) commands on sqid:1 were aborted ("aborting queued i/o"), completed manually, and reported as ABORTED - SQ DELETION (00/08) while the I/O queue pair was deleted ...]
00:21:49.447 [2024-05-13 18:34:54.107293] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15f1780 was disconnected and freed. reset controller.
00:21:49.447 [2024-05-13 18:34:54.107321] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:49.447 [2024-05-13 18:34:54.107378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:49.447 [2024-05-13 18:34:54.107411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.447 [2024-05-13 18:34:54.107428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:49.447 [2024-05-13 18:34:54.107441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.447 [2024-05-13 18:34:54.107455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:49.447 [2024-05-13 18:34:54.107468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.447 [2024-05-13 18:34:54.107481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:49.447 [2024-05-13 18:34:54.107494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:49.447 [2024-05-13 18:34:54.107507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:49.447 [2024-05-13 18:34:54.107542] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1575080 (9): Bad file descriptor
00:21:49.447 [2024-05-13 18:34:54.111368] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:49.447 [2024-05-13 18:34:54.145904] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[... 00:21:49.447-00:21:49.451 (2024-05-13 18:34:58.707646 - 18:34:58.711262): repeated nvme_qpair.c *NOTICE* print_command/print_completion records elided: WRITE (lba 57440-57816) and READ (lba 56800-57376) commands on sqid:1, each reported as ABORTED - SQ DELETION (00/08) ...]
00:21:49.451 [2024-05-13 18:34:58.711277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:57384 len:8
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.451 [2024-05-13 18:34:58.711300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.451 [2024-05-13 18:34:58.711316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:57392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.451 [2024-05-13 18:34:58.711329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.451 [2024-05-13 18:34:58.711345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:57400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.451 [2024-05-13 18:34:58.711358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.451 [2024-05-13 18:34:58.711374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.451 [2024-05-13 18:34:58.711387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.451 [2024-05-13 18:34:58.711402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:57416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.451 [2024-05-13 18:34:58.711415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.451 [2024-05-13 18:34:58.711431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:57424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.451 [2024-05-13 18:34:58.711444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.451 [2024-05-13 18:34:58.711458] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e7260 is same with the state(5) to be set 00:21:49.451 [2024-05-13 18:34:58.711474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:49.451 [2024-05-13 18:34:58.711485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:49.451 [2024-05-13 18:34:58.711496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57432 len:8 PRP1 0x0 PRP2 0x0 00:21:49.451 [2024-05-13 18:34:58.711509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.451 [2024-05-13 18:34:58.711566] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x15e7260 was disconnected and freed. reset controller. 
00:21:49.451 [2024-05-13 18:34:58.711600] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:49.451 [2024-05-13 18:34:58.711654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.451 [2024-05-13 18:34:58.711675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.451 [2024-05-13 18:34:58.711689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.451 [2024-05-13 18:34:58.711702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.451 [2024-05-13 18:34:58.711716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.451 [2024-05-13 18:34:58.711730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.451 [2024-05-13 18:34:58.711743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:49.451 [2024-05-13 18:34:58.711756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:49.451 [2024-05-13 18:34:58.711780] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.451 [2024-05-13 18:34:58.715633] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.451 [2024-05-13 18:34:58.715671] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1575080 (9): Bad file descriptor 00:21:49.451 [2024-05-13 18:34:58.749343] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
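The 'Resetting controller successful' notices emitted during these failover cycles are what the harness tallies in the next step; as a rough sketch (assuming the bdevperf output is still captured in the try.txt file that this run later cats and removes), the same count can be reproduced with:
    grep -c 'Resetting controller successful' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt    # the failover test requires this to be exactly 3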
00:21:49.451 00:21:49.451 Latency(us) 00:21:49.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.451 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:49.451 Verification LBA range: start 0x0 length 0x4000 00:21:49.451 NVMe0n1 : 15.01 9166.25 35.81 213.56 0.00 13614.82 871.33 17754.30 00:21:49.451 =================================================================================================================== 00:21:49.451 Total : 9166.25 35.81 213.56 0.00 13614.82 871.33 17754.30 00:21:49.451 Received shutdown signal, test time was about 15.000000 seconds 00:21:49.451 00:21:49.451 Latency(us) 00:21:49.451 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.451 =================================================================================================================== 00:21:49.451 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:49.451 18:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:49.451 18:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:49.451 18:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:49.451 18:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=90260 00:21:49.451 18:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 90260 /var/tmp/bdevperf.sock 00:21:49.451 18:35:04 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:49.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.451 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 90260 ']' 00:21:49.451 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.451 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:49.451 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:49.451 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:49.451 18:35:04 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:49.710 18:35:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:49.710 18:35:05 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:21:49.710 18:35:05 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:49.968 [2024-05-13 18:35:05.741918] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:49.968 18:35:05 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:50.226 [2024-05-13 18:35:06.042204] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:50.226 18:35:06 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:50.484 NVMe0n1 00:21:50.484 18:35:06 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:51.050 00:21:51.050 18:35:06 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:51.309 00:21:51.309 18:35:07 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:51.309 18:35:07 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:51.567 18:35:07 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:51.825 18:35:07 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:55.105 18:35:10 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:55.105 18:35:10 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:55.105 18:35:10 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=90397 00:21:55.105 18:35:10 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:55.105 18:35:10 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 90397 00:21:56.482 0 00:21:56.482 18:35:12 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:21:56.482 [2024-05-13 18:35:04.591157] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
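To recap the multipath setup traced above (a sketch only, reusing the addresses, ports, and NQN from this run), the subsystem gains two extra listeners and bdevperf attaches the same controller through all three paths before the primary one is torn down:
    # target side: add secondary listeners to cnode1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # host side (bdevperf RPC socket): attach the controller on each path
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # detach the 4420 path so outstanding I/O fails over to 10.0.0.2:4421
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1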
00:21:56.482 [2024-05-13 18:35:04.591273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90260 ] 00:21:56.482 [2024-05-13 18:35:04.732126] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.482 [2024-05-13 18:35:04.861714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.482 [2024-05-13 18:35:07.678004] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:56.482 [2024-05-13 18:35:07.678135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.482 [2024-05-13 18:35:07.678160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.482 [2024-05-13 18:35:07.678179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.482 [2024-05-13 18:35:07.678193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.482 [2024-05-13 18:35:07.678216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.482 [2024-05-13 18:35:07.678230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.482 [2024-05-13 18:35:07.678245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:56.482 [2024-05-13 18:35:07.678259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:56.482 [2024-05-13 18:35:07.678272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:56.482 [2024-05-13 18:35:07.678324] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:56.482 [2024-05-13 18:35:07.678357] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xced080 (9): Bad file descriptor 00:21:56.482 [2024-05-13 18:35:07.683560] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:56.482 Running I/O for 1 seconds... 
00:21:56.482 00:21:56.482 Latency(us) 00:21:56.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.482 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:56.482 Verification LBA range: start 0x0 length 0x4000 00:21:56.482 NVMe0n1 : 1.00 8113.56 31.69 0.00 0.00 15693.41 2249.08 19184.17 00:21:56.482 =================================================================================================================== 00:21:56.482 Total : 8113.56 31.69 0.00 0.00 15693.41 2249.08 19184.17 00:21:56.482 18:35:12 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:56.482 18:35:12 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:56.482 18:35:12 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:56.749 18:35:12 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:56.749 18:35:12 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:57.014 18:35:12 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:57.272 18:35:13 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:00.556 18:35:16 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:00.556 18:35:16 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:00.556 18:35:16 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 90260 00:22:00.556 18:35:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 90260 ']' 00:22:00.556 18:35:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 90260 00:22:00.556 18:35:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:22:00.556 18:35:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:00.556 18:35:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90260 00:22:00.556 killing process with pid 90260 00:22:00.556 18:35:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:00.556 18:35:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:00.556 18:35:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90260' 00:22:00.556 18:35:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 90260 00:22:00.556 18:35:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 90260 00:22:00.814 18:35:16 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:00.814 18:35:16 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:01.072 18:35:16 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:01.072 18:35:16 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:01.072 18:35:17 
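Between the detach steps the only assertion is that a controller named NVMe0 is still attached via some remaining path; a minimal form of that check, using the same RPC and grep as the trace above, is:
    # expect NVMe0 to still be reported by bdevperf after dropping a path
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0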
nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:01.072 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:01.072 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:22:01.072 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:01.072 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:22:01.072 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:01.072 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:01.072 rmmod nvme_tcp 00:22:01.330 rmmod nvme_fabrics 00:22:01.330 rmmod nvme_keyring 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 89895 ']' 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 89895 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 89895 ']' 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 89895 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89895 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:01.330 killing process with pid 89895 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89895' 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 89895 00:22:01.330 [2024-05-13 18:35:17.081550] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:01.330 18:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 89895 00:22:01.588 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:01.588 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:01.588 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:01.588 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:01.588 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:01.588 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.588 18:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.588 18:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.588 18:35:17 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:01.588 00:22:01.588 real 0m33.309s 00:22:01.588 user 2m9.868s 00:22:01.588 sys 0m4.712s 00:22:01.588 18:35:17 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:01.588 18:35:17 
nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:01.588 ************************************ 00:22:01.588 END TEST nvmf_failover 00:22:01.588 ************************************ 00:22:01.588 18:35:17 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:01.588 18:35:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:01.588 18:35:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:01.588 18:35:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:01.588 ************************************ 00:22:01.588 START TEST nvmf_host_discovery 00:22:01.588 ************************************ 00:22:01.588 18:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:01.588 * Looking for test storage... 00:22:01.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.846 18:35:17 nvmf_tcp.nvmf_host_discovery -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:01.847 18:35:17 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:01.847 Cannot find device 
"nvmf_tgt_br" 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@155 -- # true 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:01.847 Cannot find device "nvmf_tgt_br2" 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@156 -- # true 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:01.847 Cannot find device "nvmf_tgt_br" 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@158 -- # true 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:01.847 Cannot find device "nvmf_tgt_br2" 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@159 -- # true 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:01.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:01.847 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:01.847 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:02.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:22:02.106 00:22:02.106 --- 10.0.0.2 ping statistics --- 00:22:02.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.106 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:02.106 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:02.106 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:22:02.106 00:22:02.106 --- 10.0.0.3 ping statistics --- 00:22:02.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.106 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:02.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:02.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:22:02.106 00:22:02.106 --- 10.0.0.1 ping statistics --- 00:22:02.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.106 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@433 -- # return 0 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=90701 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 90701 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 90701 ']' 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:02.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:02.106 18:35:17 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.106 [2024-05-13 18:35:18.022131] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:22:02.106 [2024-05-13 18:35:18.022242] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.364 [2024-05-13 18:35:18.157843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.364 [2024-05-13 18:35:18.286190] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.364 [2024-05-13 18:35:18.286237] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:02.364 [2024-05-13 18:35:18.286249] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.364 [2024-05-13 18:35:18.286259] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.364 [2024-05-13 18:35:18.286267] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.364 [2024-05-13 18:35:18.286291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.622 [2024-05-13 18:35:18.453948] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.622 [2024-05-13 18:35:18.461860] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:02.622 [2024-05-13 18:35:18.462115] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.622 null0 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.622 null1 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:02.622 
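Condensed, the target side of the discovery test set up above is a TCP transport, a listener for the well-known discovery NQN on port 8009, and two null bdevs that back the subsystem created later (rpc_cmd is the harness's RPC helper for the target at /var/tmp/spdk.sock):
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192      # transport options as passed in by nvmf/common.sh
    rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    rpc_cmd bdev_null_create null0 1000 512              # name, size in MB, block size
    rpc_cmd bdev_null_create null1 1000 512
    rpc_cmd bdev_wait_for_examine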
18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=90739 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 90739 /tmp/host.sock 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 90739 ']' 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:02.622 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:02.622 18:35:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.622 [2024-05-13 18:35:18.554455] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:22:02.622 [2024-05-13 18:35:18.554557] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90739 ] 00:22:02.880 [2024-05-13 18:35:18.692859] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.880 [2024-05-13 18:35:18.823886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.823 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:03.823 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:22:03.823 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:03.823 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:03.823 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.823 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.823 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.823 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:03.823 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.823 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.823 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.823 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:03.823 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- 
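The 'host' in this test is simply a second SPDK application with its own RPC socket; stripped of the harness's helpers for backgrounding and socket polling (waitforlisten), the step above amounts to roughly:
    # hypothetical simplification of host/discovery.sh@44-46; this run got hostpid=90739
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!
    # all host-side RPCs that follow are issued as: rpc_cmd -s /tmp/host.sock <method> ...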
# get_subsystem_names 00:22:03.823 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.824 18:35:19 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.824 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.082 [2024-05-13 18:35:19.910408] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.082 18:35:19 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:04.082 18:35:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.082 18:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:04.082 18:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:04.082 18:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:04.082 18:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:04.082 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:04.082 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:04.082 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:04.082 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:04.082 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:04.082 18:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:04.082 18:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:04.082 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.082 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:22:04.341 18:35:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:22:04.907 [2024-05-13 18:35:20.577154] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:04.907 [2024-05-13 18:35:20.577198] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:04.907 [2024-05-13 18:35:20.577230] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:04.907 [2024-05-13 18:35:20.663314] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:04.907 [2024-05-13 18:35:20.719481] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:22:04.907 [2024-05-13 18:35:20.719523] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:05.473 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:05.473 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:05.473 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:05.473 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:05.473 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:05.473 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.473 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:05.473 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.473 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:05.473 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.473 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.473 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:05.473 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:05.473 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:05.474 18:35:21 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:05.474 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:05.733 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.734 [2024-05-13 18:35:21.519032] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:05.734 [2024-05-13 18:35:21.520171] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:05.734 [2024-05-13 18:35:21.520210] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.734 18:35:21 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:05.734 [2024-05-13 18:35:21.606229] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:05.734 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.734 [2024-05-13 18:35:21.669516] 
bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:05.734 [2024-05-13 18:35:21.669544] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:05.734 [2024-05-13 18:35:21.669552] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:05.993 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:05.993 18:35:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.928 [2024-05-13 18:35:22.828476] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:06.928 [2024-05-13 18:35:22.828529] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:06.928 [2024-05-13 18:35:22.837238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.928 [2024-05-13 18:35:22.837438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.928 [2024-05-13 18:35:22.837676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.928 [2024-05-13 18:35:22.837817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:06.928 [2024-05-13 18:35:22.837945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.928 [2024-05-13 18:35:22.838106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.928 [2024-05-13 
18:35:22.838216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.928 [2024-05-13 18:35:22.838230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.928 [2024-05-13 18:35:22.838240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe0fb0 is same with the state(5) to be set 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:06.928 [2024-05-13 18:35:22.847190] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe0fb0 (9): Bad file descriptor 00:22:06.928 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.928 [2024-05-13 18:35:22.857220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.928 [2024-05-13 18:35:22.857350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.928 [2024-05-13 18:35:22.857400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.928 [2024-05-13 18:35:22.857416] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe0fb0 with addr=10.0.0.2, port=4420 00:22:06.928 [2024-05-13 18:35:22.857429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe0fb0 is same with the state(5) to be set 00:22:06.928 [2024-05-13 18:35:22.857446] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe0fb0 (9): Bad file descriptor 00:22:06.928 [2024-05-13 18:35:22.857471] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.928 [2024-05-13 18:35:22.857482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.928 [2024-05-13 18:35:22.857493] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.928 [2024-05-13 18:35:22.857509] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
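The burst of "connect() failed, errno = 111" and "Resetting controller failed" entries above is the host-side bdev_nvme driver retrying the 10.0.0.2:4420 path after host/discovery.sh@127 removed that listener; errno 111 is ECONNREFUSED, so every reconnect attempt is rejected until the discovery poller drops the stale path. As a rough, hypothetical reproduction of that step (not the test script itself), the same RPCs seen in the trace can be issued with scripts/rpc.py from an SPDK checkout; the socket path /tmp/host.sock, the controller name nvme0 and the 10.0.0.2:4420 portal are all taken from the trace, and rpc_cmd in the trace is only a thin test wrapper around this tool.

# Hypothetical sketch: remove the first portal, then list the paths the host still sees.
# Assumes an SPDK nvmf target on the default RPC socket and a host app on /tmp/host.sock.
NQN=nqn.2016-06.io.spdk:cnode0

# Target side: drop the 4420 listener (mirrors host/discovery.sh@127 above).
scripts/rpc.py nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Host side: the trsvcid list should eventually shrink to "4421" only.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs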
00:22:06.928 [2024-05-13 18:35:22.867282] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.928 [2024-05-13 18:35:22.867379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.928 [2024-05-13 18:35:22.867424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.928 [2024-05-13 18:35:22.867440] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe0fb0 with addr=10.0.0.2, port=4420 00:22:06.928 [2024-05-13 18:35:22.867450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe0fb0 is same with the state(5) to be set 00:22:06.928 [2024-05-13 18:35:22.867466] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe0fb0 (9): Bad file descriptor 00:22:06.928 [2024-05-13 18:35:22.867489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.928 [2024-05-13 18:35:22.867499] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.928 [2024-05-13 18:35:22.867509] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.928 [2024-05-13 18:35:22.867524] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:07.188 [2024-05-13 18:35:22.877335] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:07.188 [2024-05-13 18:35:22.877426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.188 [2024-05-13 18:35:22.877472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.188 [2024-05-13 18:35:22.877488] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe0fb0 with addr=10.0.0.2, port=4420 00:22:07.188 [2024-05-13 18:35:22.877498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe0fb0 is same with the state(5) to be set 00:22:07.188 [2024-05-13 18:35:22.877514] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe0fb0 (9): Bad file descriptor 00:22:07.188 [2024-05-13 18:35:22.877537] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:07.188 [2024-05-13 18:35:22.877548] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:07.188 [2024-05-13 18:35:22.877557] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:07.188 [2024-05-13 18:35:22.877586] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
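The "local max=10" / "(( max-- ))" / "eval ..." / "sleep 1" lines that recur throughout this trace are autotest_common.sh's waitforcondition helper: it re-evaluates an arbitrary bash condition once per second for up to ten attempts, which is how the test tolerates the transient reconnect failures shown above. A minimal stand-alone sketch of that polling pattern follows; it mirrors the calls visible at common/autotest_common.sh@910-916 in the trace but is not claimed to be the verbatim helper.

# Sketch of the waitforcondition polling pattern seen in the trace.
waitforcondition() {
    local cond=$1        # e.g. '[[ "$(get_subsystem_paths nvme0)" == "4421" ]]'
    local max=10         # at most ten one-second attempts
    while (( max-- )); do
        if eval "$cond"; then
            return 0     # condition became true
        fi
        sleep 1
    done
    return 1             # condition never met; callers treat this as a test failure
}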
00:22:07.188 [2024-05-13 18:35:22.887392] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:07.188 [2024-05-13 18:35:22.887470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.188 [2024-05-13 18:35:22.887514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.188 [2024-05-13 18:35:22.887530] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe0fb0 with addr=10.0.0.2, port=4420 00:22:07.188 [2024-05-13 18:35:22.887540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe0fb0 is same with the state(5) to be set 00:22:07.188 [2024-05-13 18:35:22.887556] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe0fb0 (9): Bad file descriptor 00:22:07.188 [2024-05-13 18:35:22.887591] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:07.188 [2024-05-13 18:35:22.887603] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:07.188 [2024-05-13 18:35:22.887612] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:07.188 [2024-05-13 18:35:22.887627] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:07.188 [2024-05-13 18:35:22.897441] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:07.188 [2024-05-13 18:35:22.897522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.188 [2024-05-13 18:35:22.897567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.188 [2024-05-13 18:35:22.897597] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe0fb0 with addr=10.0.0.2, port=4420 00:22:07.188 
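get_subsystem_names, get_bdev_list and get_subsystem_paths, which this section calls over and over, all follow the same shape: query the host application's RPC socket, pull one field out with jq, then normalize with sort and xargs so the result can be string-compared against literals such as "nvme0n1 nvme0n2" or "4420 4421". Below is a hedged recap of the three helpers using only the RPC methods and jq filters that appear in the trace; the names match host/discovery.sh@55/59/63, but this is a reconstruction (with scripts/rpc.py standing in for the rpc_cmd wrapper), not the script itself.

HOST_SOCK=/tmp/host.sock   # host application's RPC socket, as used above

get_subsystem_names() {    # attached controller names, e.g. "nvme0"
    scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {          # namespaces exposed as bdevs, e.g. "nvme0n1 nvme0n2"
    scripts/rpc.py -s "$HOST_SOCK" bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {    # trsvcids of the paths behind controller $1, e.g. "4420 4421"
    scripts/rpc.py -s "$HOST_SOCK" bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}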
[2024-05-13 18:35:22.897608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe0fb0 is same with the state(5) to be set 00:22:07.188 [2024-05-13 18:35:22.897624] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe0fb0 (9): Bad file descriptor 00:22:07.188 [2024-05-13 18:35:22.897647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:07.188 [2024-05-13 18:35:22.897658] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:07.188 [2024-05-13 18:35:22.897667] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:07.188 [2024-05-13 18:35:22.897681] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:07.188 [2024-05-13 18:35:22.907493] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:07.188 [2024-05-13 18:35:22.907589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.188 [2024-05-13 18:35:22.907639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.188 [2024-05-13 18:35:22.907655] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fe0fb0 with addr=10.0.0.2, port=4420 00:22:07.188 [2024-05-13 18:35:22.907665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe0fb0 is same with the state(5) to be set 00:22:07.188 [2024-05-13 18:35:22.907681] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fe0fb0 (9): Bad file descriptor 00:22:07.188 [2024-05-13 18:35:22.907705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:07.188 [2024-05-13 18:35:22.907715] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:07.188 [2024-05-13 18:35:22.907724] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:07.188 [2024-05-13 18:35:22.907739] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
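Immediately below, the discovery poller reports the 4420 path as "not found" and 4421 as "found again", and the test then asserts that get_subsystem_paths returns only 4421 and that no new bdev notifications arrived (is_notification_count_eq 0). The notification check asks the host app for every notification newer than the last seen ID and counts them with jq. The sketch below uses the same assumptions as the snippets above; the cursor handling is inferred from the notification_count/notify_id pairs printed in the trace, so treat it as an illustration rather than the exact helper.

notify_id=2   # last notification ID already consumed at this point in the trace

get_notification_count() {
    # Count notifications newer than $notify_id, then advance the cursor
    # (mirrors host/discovery.sh@74-75 as printed above).
    notification_count=$(scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i "$notify_id" \
        | jq '. | length')
    notify_id=$(( notify_id + notification_count ))
}

# After removing only a listener (no namespace change), expect zero new notifications.
get_notification_count && (( notification_count == 0 ))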
00:22:07.188 [2024-05-13 18:35:22.913718] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:07.188 [2024-05-13 18:35:22.913751] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:07.188 18:35:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.188 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:22:07.188 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:07.189 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:07.447 
18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:07.447 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.448 18:35:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.382 [2024-05-13 18:35:24.277714] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:08.382 [2024-05-13 18:35:24.277750] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:08.382 [2024-05-13 18:35:24.277770] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:08.641 [2024-05-13 18:35:24.363912] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:08.641 [2024-05-13 18:35:24.423485] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:08.641 [2024-05-13 18:35:24.423579] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.641 2024/05/13 18:35:24 error on JSON-RPC call, method: 
bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:08.641 request: 00:22:08.641 { 00:22:08.641 "method": "bdev_nvme_start_discovery", 00:22:08.641 "params": { 00:22:08.641 "name": "nvme", 00:22:08.641 "trtype": "tcp", 00:22:08.641 "traddr": "10.0.0.2", 00:22:08.641 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:08.641 "adrfam": "ipv4", 00:22:08.641 "trsvcid": "8009", 00:22:08.641 "wait_for_attach": true 00:22:08.641 } 00:22:08.641 } 00:22:08.641 Got JSON-RPC error response 00:22:08.641 GoRPCClient: error on JSON-RPC call 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:08.641 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b 
nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.642 2024/05/13 18:35:24 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:22:08.642 request: 00:22:08.642 { 00:22:08.642 "method": "bdev_nvme_start_discovery", 00:22:08.642 "params": { 00:22:08.642 "name": "nvme_second", 00:22:08.642 "trtype": "tcp", 00:22:08.642 "traddr": "10.0.0.2", 00:22:08.642 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:08.642 "adrfam": "ipv4", 00:22:08.642 "trsvcid": "8009", 00:22:08.642 "wait_for_attach": true 00:22:08.642 } 00:22:08.642 } 00:22:08.642 Got JSON-RPC error response 00:22:08.642 GoRPCClient: error on JSON-RPC call 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:08.642 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:08.900 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.901 18:35:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.834 [2024-05-13 18:35:25.701148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.834 [2024-05-13 18:35:25.701253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.834 [2024-05-13 18:35:25.701272] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20509f0 with addr=10.0.0.2, port=8010 00:22:09.834 [2024-05-13 18:35:25.701295] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:09.834 [2024-05-13 18:35:25.701306] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:09.834 [2024-05-13 18:35:25.701315] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:10.769 [2024-05-13 18:35:26.701139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.769 [2024-05-13 18:35:26.701247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.769 [2024-05-13 18:35:26.701267] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20509f0 with addr=10.0.0.2, port=8010 00:22:10.769 [2024-05-13 18:35:26.701290] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:10.769 [2024-05-13 18:35:26.701299] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:10.769 [2024-05-13 18:35:26.701309] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:12.143 [2024-05-13 
18:35:27.700977] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:12.143 2024/05/13 18:35:27 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:22:12.143 request: 00:22:12.143 { 00:22:12.143 "method": "bdev_nvme_start_discovery", 00:22:12.143 "params": { 00:22:12.143 "name": "nvme_second", 00:22:12.143 "trtype": "tcp", 00:22:12.143 "traddr": "10.0.0.2", 00:22:12.143 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:12.143 "adrfam": "ipv4", 00:22:12.143 "trsvcid": "8010", 00:22:12.143 "attach_timeout_ms": 3000 00:22:12.143 } 00:22:12.143 } 00:22:12.143 Got JSON-RPC error response 00:22:12.143 GoRPCClient: error on JSON-RPC call 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 90739 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:12.143 rmmod nvme_tcp 00:22:12.143 rmmod nvme_fabrics 00:22:12.143 rmmod nvme_keyring 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- 
nvmf/common.sh@125 -- # return 0 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 90701 ']' 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 90701 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 90701 ']' 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 90701 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90701 00:22:12.143 killing process with pid 90701 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90701' 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 90701 00:22:12.143 [2024-05-13 18:35:27.883130] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:12.143 18:35:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 90701 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:12.400 00:22:12.400 real 0m10.732s 00:22:12.400 user 0m21.499s 00:22:12.400 sys 0m1.691s 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:12.400 ************************************ 00:22:12.400 END TEST nvmf_host_discovery 00:22:12.400 ************************************ 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:12.400 18:35:28 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:12.400 18:35:28 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:12.400 18:35:28 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:12.400 18:35:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:12.400 ************************************ 00:22:12.400 START TEST nvmf_host_multipath_status 00:22:12.400 ************************************ 
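The nvmf_host_discovery run that ends above reduces to three bdev_nvme_start_discovery cases against the same target: the first discovery service attaches and picks up nqn.2016-06.io.spdk:cnode0 behind 10.0.0.2:8009; any further attempt to point a discovery service at that already-attached endpoint, under the same or a new name, is rejected with Code=-17 Msg=File exists; and a discovery attempt against 10.0.0.2:8010, where nothing is listening, fails once the 3000 ms attach timeout expires with Code=-110 Msg=Connection timed out. A minimal replay of that sequence, assuming the same host socket and addresses as this run and calling scripts/rpc.py directly rather than through the harness's rpc_cmd wrapper:

# attaches and discovers nqn.2016-06.io.spdk:cnode0 behind 10.0.0.2:8009
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
# same endpoint again, even under a new name -> Code=-17 Msg=File exists
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
# no listener on port 8010, so the 3000 ms attach timeout expires -> Code=-110 Msg=Connection timed out
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000
# only the original discovery service remains registered
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name'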
00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:12.400 * Looking for test storage... 00:22:12.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.400 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.401 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.401 18:35:28 nvmf_tcp.nvmf_host_multipath_status 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.401 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.401 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:12.401 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.401 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:12.401 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:12.401 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:12.401 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:12.401 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.401 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.401 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:12.401 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:12.401 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:12.662 Cannot find device "nvmf_tgt_br" 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # true 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status 
-- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:12.662 Cannot find device "nvmf_tgt_br2" 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # true 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:12.662 Cannot find device "nvmf_tgt_br" 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # true 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:12.662 Cannot find device "nvmf_tgt_br2" 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # true 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:12.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:12.662 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:12.662 18:35:28 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:12.662 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:12.952 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:12.952 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:12.952 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:12.952 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:12.952 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:12.952 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:12.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:22:12.952 00:22:12.952 --- 10.0.0.2 ping statistics --- 00:22:12.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.952 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:22:12.952 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:12.952 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:12.952 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:22:12.952 00:22:12.952 --- 10.0.0.3 ping statistics --- 00:22:12.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.952 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:22:12.952 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:12.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:12.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:22:12.952 00:22:12.952 --- 10.0.0.1 ping statistics --- 00:22:12.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.953 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@433 -- # return 0 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=91221 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 91221 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 91221 ']' 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:12.953 18:35:28 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:12.953 [2024-05-13 18:35:28.777347] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:22:12.953 [2024-05-13 18:35:28.777461] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:13.211 [2024-05-13 18:35:28.915015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:13.211 [2024-05-13 18:35:29.049783] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:13.211 [2024-05-13 18:35:29.050077] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:13.211 [2024-05-13 18:35:29.050216] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:13.211 [2024-05-13 18:35:29.050348] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:13.211 [2024-05-13 18:35:29.050383] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:13.211 [2024-05-13 18:35:29.050630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:13.211 [2024-05-13 18:35:29.050640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.144 18:35:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:14.144 18:35:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:22:14.144 18:35:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:14.144 18:35:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:14.144 18:35:29 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:14.144 18:35:29 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:14.144 18:35:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=91221 00:22:14.144 18:35:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:14.144 [2024-05-13 18:35:30.053242] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.144 18:35:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:14.708 Malloc0 00:22:14.708 18:35:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:14.966 18:35:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:15.223 18:35:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.223 [2024-05-13 18:35:31.135632] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:15.223 [2024-05-13 18:35:31.135943] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.223 18:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:15.481 [2024-05-13 18:35:31.384009] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:15.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
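The target-side setup traced above through /home/vagrant/spdk_repo/spdk/scripts/rpc.py is what gives the initiator two TCP paths to one namespace: a Malloc0 malloc bdev (size 64, block size 512, per the MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE values earlier in this test) exported through nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled (-r) and listeners on 10.0.0.2:4420 and 10.0.0.2:4421. Condensed into a sketch, assuming the commands are run from the SPDK repo root as in this job:

# transport and backing bdev
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# one subsystem with ANA reporting, namespace attached, two TCP listeners
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421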
00:22:15.481 18:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=91320 00:22:15.481 18:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:15.481 18:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:15.481 18:35:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 91320 /var/tmp/bdevperf.sock 00:22:15.481 18:35:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 91320 ']' 00:22:15.481 18:35:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.481 18:35:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:15.481 18:35:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.481 18:35:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:15.481 18:35:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:16.852 18:35:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:16.852 18:35:32 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:22:16.852 18:35:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:16.852 18:35:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:17.417 Nvme0n1 00:22:17.417 18:35:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:17.675 Nvme0n1 00:22:17.675 18:35:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:17.675 18:35:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:20.212 18:35:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:20.212 18:35:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:20.212 18:35:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:20.212 18:35:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:21.586 18:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 
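On the initiator side, the trace above attaches the same subsystem twice through the bdevperf RPC socket (first over 4420, then over 4421 with -x multipath, both resolving to Nvme0n1). The test then sets the listeners' ANA state on the target, and each check_status round that follows reads the resulting path view back with bdev_nvme_get_io_paths, filtered per trsvcid with jq. A minimal sketch of the first round (both listeners optimized, so 4420 is expected to be the current path), assuming the same /var/tmp/bdevperf.sock socket and addresses as this run:

# target side: both listeners advertised as optimized
scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
# initiator side: expect current=true for the 4420 path and current=false for 4421
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'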
00:22:21.586 18:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:21.586 18:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.586 18:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:21.586 18:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.586 18:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:21.586 18:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.586 18:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:22.151 18:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:22.151 18:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:22.151 18:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.151 18:35:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:22.409 18:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.409 18:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:22.409 18:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:22.409 18:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.667 18:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.667 18:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:22.667 18:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.667 18:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:22.925 18:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.925 18:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:22.925 18:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.925 18:35:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:23.490 18:35:39 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.490 18:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:23.490 18:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:23.748 18:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:24.006 18:35:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:24.960 18:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:24.960 18:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:24.960 18:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.960 18:35:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:25.255 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:25.255 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:25.255 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.255 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:25.513 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.513 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:25.513 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.513 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:25.792 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.792 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:25.793 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.793 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:26.052 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.052 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:26.052 18:35:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.052 18:35:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:26.310 18:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.310 18:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:26.310 18:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.310 18:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:26.568 18:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.568 18:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:26.568 18:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:26.827 18:35:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:27.393 18:35:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:28.338 18:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:28.338 18:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:28.338 18:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.338 18:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:28.599 18:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.599 18:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:28.599 18:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.599 18:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:28.858 18:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:28.858 18:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:28.858 18:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.858 18:35:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:29.119 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.119 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:29.119 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.119 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:29.384 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.384 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:29.384 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.384 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:29.671 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.671 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:29.671 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:29.671 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.930 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.930 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:29.930 18:35:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:30.188 18:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:30.756 18:35:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:31.692 18:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:31.692 18:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:31.692 18:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.692 18:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:31.950 18:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.950 18:35:47 nvmf_tcp.nvmf_host_multipath_status 
-- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:31.950 18:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.950 18:35:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:32.207 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:32.207 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:32.207 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.207 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:32.465 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.465 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:32.465 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.465 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:32.722 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.722 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:32.722 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.722 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:32.980 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.980 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:32.980 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.980 18:35:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:33.237 18:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:33.237 18:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:33.237 18:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:33.496 18:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:34.061 18:35:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:35.010 18:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:35.010 18:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:35.010 18:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.010 18:35:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:35.268 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:35.268 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:35.268 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.268 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:35.525 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:35.525 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:35.525 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.525 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:35.783 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.783 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:35.783 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.783 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:36.041 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.041 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:36.041 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.041 18:35:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:36.299 18:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:36.299 18:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:36.299 18:35:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.299 18:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:36.556 18:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:36.556 18:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:36.556 18:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:37.122 18:35:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:37.122 18:35:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:38.498 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:38.498 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:38.498 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.498 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:38.498 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:38.498 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:38.498 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.498 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:38.757 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:38.757 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:38.757 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:38.757 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.324 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.324 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:39.324 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.324 18:35:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").connected' 00:22:39.324 18:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.324 18:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:39.324 18:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:39.324 18:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.582 18:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:39.582 18:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:39.582 18:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.582 18:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:40.149 18:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.149 18:35:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:40.149 18:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:40.149 18:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:40.407 18:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:40.666 18:35:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:41.600 18:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:41.600 18:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:41.600 18:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.600 18:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:41.859 18:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:41.859 18:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:41.859 18:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:41.859 18:35:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 
00:22:42.425 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.425 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:42.425 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.425 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:42.425 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.425 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:42.425 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.425 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:42.991 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.991 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:42.991 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.991 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:43.250 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.250 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:43.250 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:43.250 18:35:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.509 18:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.509 18:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:43.509 18:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:43.792 18:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:44.067 18:35:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:45.000 18:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:45.000 18:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:45.000 
18:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.000 18:36:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:45.256 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:45.256 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:45.256 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:45.256 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.821 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.821 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:45.821 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.821 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:45.821 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.821 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:45.822 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.822 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:46.079 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.079 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:46.079 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.079 18:36:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:46.337 18:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.337 18:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:46.337 18:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.337 18:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:46.595 18:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.595 18:36:02 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:46.595 18:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:46.852 18:36:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:47.418 18:36:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:48.351 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:48.351 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:48.351 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.351 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:48.609 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.609 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:48.609 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.609 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:48.866 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:48.866 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:48.866 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:48.866 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:49.124 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.124 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:49.124 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.124 18:36:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:49.381 18:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.381 18:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:49.381 18:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:22:49.381 18:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.639 18:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:49.639 18:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:49.639 18:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:49.639 18:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:50.202 18:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:50.202 18:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:50.202 18:36:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:50.460 18:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:50.719 18:36:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:51.654 18:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:51.654 18:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:51.654 18:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:51.654 18:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:51.912 18:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:51.912 18:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:51.912 18:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:51.912 18:36:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.170 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:52.170 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:52.170 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.170 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:52.427 18:36:08 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.427 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:52.427 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.427 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:52.993 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.993 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:52.993 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:52.993 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.993 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:52.993 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:52.993 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:52.993 18:36:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:53.252 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:53.252 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 91320 00:22:53.252 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 91320 ']' 00:22:53.252 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 91320 00:22:53.252 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:22:53.252 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:53.252 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91320 00:22:53.252 killing process with pid 91320 00:22:53.252 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:22:53.252 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:22:53.252 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91320' 00:22:53.252 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 91320 00:22:53.252 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 91320 00:22:53.511 Connection closed with partial response: 00:22:53.511 00:22:53.511 00:22:53.773 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 91320 00:22:53.773 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:53.773 [2024-05-13 18:35:31.453226] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:22:53.773 [2024-05-13 18:35:31.453349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91320 ] 00:22:53.773 [2024-05-13 18:35:31.589028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.773 [2024-05-13 18:35:31.707069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.773 Running I/O for 90 seconds... 00:22:53.773 [2024-05-13 18:35:49.404193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:36592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.773 [2024-05-13 18:35:49.404289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:53.773 [2024-05-13 18:35:49.404370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.773 [2024-05-13 18:35:49.404395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:53.773 [2024-05-13 18:35:49.404419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.773 [2024-05-13 18:35:49.404436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:53.773 [2024-05-13 18:35:49.404458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.773 [2024-05-13 18:35:49.404473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:53.773 [2024-05-13 18:35:49.404504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.773 [2024-05-13 18:35:49.404532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.404586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.404606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.404629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:36640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.404644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.404666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.404681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:86 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.404702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.404733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.404756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.404771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.404793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.404833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.404857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.404874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.404895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.404910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.404931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.404946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.404967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.404982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:36712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:36744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:36760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:36768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:36800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405599] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:36824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.405768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:36840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.405783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.406054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.406079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.406108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.406126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.406150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.406166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.406189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.406220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.406246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.406261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.406286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36888 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:53.774 [2024-05-13 18:35:49.406302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.406330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:36896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.406345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.406369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.406384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.406408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:36912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.406423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.406447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.406462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.406486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.406501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.406526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.406551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:53.774 [2024-05-13 18:35:49.406610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.774 [2024-05-13 18:35:49.406631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.406656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.406672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.406695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.406711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.406734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 
nsid:1 lba:36968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.406831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.406860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.406876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.406900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.406916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.406940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.406955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.406978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:37000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.406996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:37008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.407075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.407124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.407182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.407233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.407272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:37048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.407312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:36224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.775 [2024-05-13 18:35:49.407350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.775 [2024-05-13 18:35:49.407390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.775 [2024-05-13 18:35:49.407441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.775 [2024-05-13 18:35:49.407481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.775 [2024-05-13 18:35:49.407538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.775 [2024-05-13 18:35:49.407603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.775 [2024-05-13 18:35:49.407662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:37056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.407713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:37064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.407752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 
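The entries above are the bdevperf log captured in try.txt: while a listener's ANA state is inaccessible, I/O routed to that path completes with the path-related NVMe status "Asymmetric Access Inaccessible" (printed as 03/02, i.e. status code type 0x3, status code 0x02), which the host-side multipath code uses to retry the I/O on the remaining accessible path. As a quick, hypothetical post-processing check (not part of the test itself), the number of commands failed this way could be counted directly from the captured file:

    # Hypothetical check -- counts path-related ANA-inaccessible completions in the captured bdevperf log.
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt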
00:22:53.775 [2024-05-13 18:35:49.407776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.407792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:37080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.407831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:37088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.407884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:37096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.407953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.407979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:37104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.407998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.408034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:37112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.408050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.408076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:37120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.408092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.408115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:37128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.408130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.408154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:37136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.408170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.408193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:37144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.408208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.408232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:37152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.408247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.408272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:37160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.408287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.408471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:37168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.408507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.408555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:37176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.775 [2024-05-13 18:35:49.408589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.408623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.775 [2024-05-13 18:35:49.408652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.408696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.775 [2024-05-13 18:35:49.408729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.408759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.775 [2024-05-13 18:35:49.408775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:53.775 [2024-05-13 18:35:49.408815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.408832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.408859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.408874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.408901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.408925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.408966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.408987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:36352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:36376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:36392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:53.776 [2024-05-13 18:35:49.409463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:36416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:36448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:36464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.409858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:37184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.776 [2024-05-13 18:35:49.409916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.409956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 
nsid:1 lba:37192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.776 [2024-05-13 18:35:49.409974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.776 [2024-05-13 18:35:49.410016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.776 [2024-05-13 18:35:49.410068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:37216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.776 [2024-05-13 18:35:49.410119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:37224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.776 [2024-05-13 18:35:49.410185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:37232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.776 [2024-05-13 18:35:49.410232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:36472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.410294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.410344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.410387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.410428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.410470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.410513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.410554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.776 [2024-05-13 18:35:49.410614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:36528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.410656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.410710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.410753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:53.776 [2024-05-13 18:35:49.410779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.776 [2024-05-13 18:35:49.410794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:35:49.410822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:36560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:35:49.410838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:35:49.410866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:36568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:35:49.410899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:22:53.777 [2024-05-13 18:35:49.410941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:35:49.410960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:35:49.410988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:35:49.411012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.434187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.777 [2024-05-13 18:36:06.434263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.434331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.777 [2024-05-13 18:36:06.434351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.434376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:48296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.777 [2024-05-13 18:36:06.434391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.434412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.777 [2024-05-13 18:36:06.434428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.434449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:48328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.777 [2024-05-13 18:36:06.434464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.434517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:48344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.777 [2024-05-13 18:36:06.434534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.434557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.434593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.434630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.434648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.434947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.434973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.434999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:47912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:47944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:47976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435314] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:47952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:47984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:48016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:48056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:48088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:48120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:48008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:48040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:53.777 [2024-05-13 18:36:06.435700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:48064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:48096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:48168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:48200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:48232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:48128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.435970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:53.777 [2024-05-13 18:36:06.435991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:48160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.777 [2024-05-13 18:36:06.436006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.436027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.778 [2024-05-13 18:36:06.436041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.436063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 
nsid:1 lba:48224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:53.778 [2024-05-13 18:36:06.436078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.438013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.778 [2024-05-13 18:36:06.438050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.438090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:48392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.778 [2024-05-13 18:36:06.438107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.438135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:48408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.778 [2024-05-13 18:36:06.438150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.438171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.778 [2024-05-13 18:36:06.438186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.438221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.778 [2024-05-13 18:36:06.438237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.438269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.778 [2024-05-13 18:36:06.438284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.438305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.778 [2024-05-13 18:36:06.438319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.438340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.778 [2024-05-13 18:36:06.438354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.438377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.778 [2024-05-13 18:36:06.438392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.438413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.778 [2024-05-13 18:36:06.438427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.438449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.778 [2024-05-13 18:36:06.438463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.438484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.778 [2024-05-13 18:36:06.438499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:53.778 [2024-05-13 18:36:06.438520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:53.778 [2024-05-13 18:36:06.438535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:53.778 Received shutdown signal, test time was about 35.569875 seconds 00:22:53.778 00:22:53.778 Latency(us) 00:22:53.778 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.778 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:53.778 Verification LBA range: start 0x0 length 0x4000 00:22:53.778 Nvme0n1 : 35.57 8150.37 31.84 0.00 0.00 15672.16 871.33 4026531.84 00:22:53.778 =================================================================================================================== 00:22:53.778 Total : 8150.37 31.84 0.00 0.00 15672.16 871.33 4026531.84 00:22:53.778 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:54.128 rmmod nvme_tcp 00:22:54.128 rmmod nvme_fabrics 00:22:54.128 rmmod nvme_keyring 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- 
nvmf/common.sh@125 -- # return 0 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 91221 ']' 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 91221 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 91221 ']' 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 91221 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 91221 00:22:54.128 killing process with pid 91221 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:22:54.128 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:22:54.129 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 91221' 00:22:54.129 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 91221 00:22:54.129 [2024-05-13 18:36:09.879119] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:54.129 18:36:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 91221 00:22:54.399 18:36:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:54.399 18:36:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:54.399 18:36:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:54.399 18:36:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:54.399 18:36:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:54.399 18:36:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.399 18:36:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.399 18:36:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.659 18:36:10 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:54.659 ************************************ 00:22:54.659 END TEST nvmf_host_multipath_status 00:22:54.659 ************************************ 00:22:54.659 00:22:54.659 real 0m42.122s 00:22:54.659 user 2m17.591s 00:22:54.659 sys 0m10.604s 00:22:54.659 18:36:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:54.659 18:36:10 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:54.659 18:36:10 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:54.659 18:36:10 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:54.659 18:36:10 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:54.659 18:36:10 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:22:54.659 ************************************ 00:22:54.659 START TEST nvmf_discovery_remove_ifc 00:22:54.659 ************************************ 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:54.659 * Looking for test storage... 00:22:54.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:54.659 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc 
-- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # nvmf_veth_init 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:54.660 18:36:10 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:54.660 Cannot find device "nvmf_tgt_br" 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # true 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:54.660 Cannot find device "nvmf_tgt_br2" 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # true 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:54.660 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:54.919 Cannot find device "nvmf_tgt_br" 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # true 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:54.919 Cannot find device "nvmf_tgt_br2" 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # true 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:54.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:54.919 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 
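The nvmf_veth_init sequence above (continued just below with the link bring-up, bridge and iptables rules) builds the three-address topology this test relies on. A condensed sketch of the equivalent manual setup, with interface names and addresses taken from the log and the exact common.sh ordering simplified:

    # network namespace that will hold the SPDK target
    ip netns add nvmf_tgt_ns_spdk
    # initiator-side veth pair stays on the host; the two target-side pairs move into the namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addresses: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # all *_br peer ends join one bridge so host and namespace can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

Joining every *_br peer end to nvmf_br is what lets the initiator address 10.0.0.1 reach the target listeners on 10.0.0.2 and 10.0.0.3, which the ping checks below then verify.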
00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:54.919 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:55.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:22:55.179 00:22:55.179 --- 10.0.0.2 ping statistics --- 00:22:55.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.179 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:55.179 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:55.179 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:22:55.179 00:22:55.179 --- 10.0.0.3 ping statistics --- 00:22:55.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.179 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:55.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:55.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:22:55.179 00:22:55.179 --- 10.0.0.1 ping statistics --- 00:22:55.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.179 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@433 -- # return 0 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=92633 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 92633 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 92633 ']' 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:55.179 18:36:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:55.179 [2024-05-13 18:36:10.986344] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:22:55.179 [2024-05-13 18:36:10.986458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.438 [2024-05-13 18:36:11.125608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.438 [2024-05-13 18:36:11.250963] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
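With connectivity across the bridge confirmed by the three pings, nvmfappstart launches nvmf_tgt inside the namespace and waitforlisten blocks until its RPC socket answers. A minimal sketch of that pattern, with the binary path and core mask taken from the log and the readiness poll simplified to a generic RPC probe (the real waitforlisten helper in autotest_common.sh is more involved):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # wait until the target's default RPC socket (/var/tmp/spdk.sock) accepts commands
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done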
00:22:55.438 [2024-05-13 18:36:11.251040] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.438 [2024-05-13 18:36:11.251059] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.438 [2024-05-13 18:36:11.251072] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.438 [2024-05-13 18:36:11.251083] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.438 [2024-05-13 18:36:11.251131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.374 18:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:56.374 18:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:22:56.374 18:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:56.374 18:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:56.374 18:36:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.374 18:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.374 18:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:56.374 18:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:56.374 18:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.374 [2024-05-13 18:36:12.024270] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.374 [2024-05-13 18:36:12.032184] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:56.374 [2024-05-13 18:36:12.032502] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:56.374 null0 00:22:56.374 [2024-05-13 18:36:12.064326] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.374 18:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:56.374 18:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=92683 00:22:56.374 18:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:56.374 18:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 92683 /tmp/host.sock 00:22:56.374 18:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 92683 ']' 00:22:56.374 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:56.374 18:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:22:56.374 18:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:56.374 18:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
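[editor's note] The xtrace above is driving two separate SPDK nvmf_tgt instances: one acts as the NVMe-oF target inside the nvmf_tgt_ns_spdk namespace (RPC on the default /var/tmp/spdk.sock, launched by nvmfappstart), the other is reused as the host/initiator app that exercises bdev_nvme over its own RPC socket. A minimal sketch of that layout, reconstructed only from the commands visible in this log (paths and core masks are the ones used in this run):

    # target side: lives in the netns, owns 10.0.0.2, RPC on /var/tmp/spdk.sock
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

    # host side: plain nvmf_tgt used as the initiator, RPC on /tmp/host.sock,
    # started with --wait-for-rpc so bdev_nvme options can be set before init
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock \
        --wait-for-rpc -L bdev_nvme &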
00:22:56.374 18:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:56.374 18:36:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.374 [2024-05-13 18:36:12.148541] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:22:56.374 [2024-05-13 18:36:12.148662] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92683 ] 00:22:56.374 [2024-05-13 18:36:12.289018] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.632 [2024-05-13 18:36:12.448130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:57.199 18:36:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:57.199 18:36:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:22:57.199 18:36:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:57.199 18:36:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:57.199 18:36:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.199 18:36:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.457 18:36:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.457 18:36:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:57.457 18:36:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.457 18:36:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:57.457 18:36:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.457 18:36:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:57.457 18:36:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.457 18:36:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.390 [2024-05-13 18:36:14.297901] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:58.390 [2024-05-13 18:36:14.297952] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:58.390 [2024-05-13 18:36:14.297973] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:58.648 [2024-05-13 18:36:14.384072] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:58.648 [2024-05-13 18:36:14.440397] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:58.648 [2024-05-13 18:36:14.440463] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:58.648 [2024-05-13 
18:36:14.440494] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:58.648 [2024-05-13 18:36:14.440513] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:58.648 [2024-05-13 18:36:14.440543] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:58.648 [2024-05-13 18:36:14.446159] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1912aa0 was disconnected and freed. delete nvme_qpair. 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:58.648 18:36:14 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:00.049 18:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:00.049 18:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.049 18:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.049 18:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:00.049 18:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:00.049 18:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:00.049 18:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:00.049 18:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.049 18:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:00.049 18:36:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:00.983 18:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:00.983 18:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.983 18:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:00.983 18:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.983 18:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:00.983 18:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:00.983 18:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:00.983 18:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.983 18:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:00.983 18:36:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:01.917 18:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:01.917 18:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.917 18:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:01.917 18:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:01.917 18:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:01.917 18:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:01.917 18:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:01.917 18:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.917 18:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:01.917 18:36:17 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:02.852 18:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:02.852 18:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.852 18:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:02.852 18:36:18 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.852 18:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:02.852 18:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:02.852 18:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:03.110 18:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.110 18:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:03.110 18:36:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:04.045 18:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:04.045 18:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.045 18:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.045 18:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:04.045 18:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.045 18:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:04.045 18:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:04.045 18:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.045 [2024-05-13 18:36:19.867970] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:04.045 [2024-05-13 18:36:19.868228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.045 [2024-05-13 18:36:19.868377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.045 [2024-05-13 18:36:19.868503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.045 [2024-05-13 18:36:19.868565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.045 [2024-05-13 18:36:19.868745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.045 [2024-05-13 18:36:19.868869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.045 [2024-05-13 18:36:19.869086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.045 [2024-05-13 18:36:19.869141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.045 [2024-05-13 18:36:19.869255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:04.045 [2024-05-13 18:36:19.869266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:04.045 
[2024-05-13 18:36:19.869277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc030 is same with the state(5) to be set 00:23:04.045 [2024-05-13 18:36:19.877966] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18dc030 (9): Bad file descriptor 00:23:04.045 [2024-05-13 18:36:19.887990] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:04.045 18:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:04.045 18:36:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:04.980 18:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:04.980 18:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:04.980 18:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.980 18:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:04.980 18:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.980 18:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:04.980 18:36:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.980 [2024-05-13 18:36:20.916662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:06.356 [2024-05-13 18:36:21.940728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:06.356 [2024-05-13 18:36:21.940867] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18dc030 with addr=10.0.0.2, port=4420 00:23:06.356 [2024-05-13 18:36:21.940904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc030 is same with the state(5) to be set 00:23:06.356 [2024-05-13 18:36:21.941819] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18dc030 (9): Bad file descriptor 00:23:06.356 [2024-05-13 18:36:21.941900] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:06.356 [2024-05-13 18:36:21.941956] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:06.356 [2024-05-13 18:36:21.942026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.356 [2024-05-13 18:36:21.942056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.356 [2024-05-13 18:36:21.942083] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.356 [2024-05-13 18:36:21.942104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.356 [2024-05-13 18:36:21.942127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.356 [2024-05-13 18:36:21.942147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.356 [2024-05-13 18:36:21.942169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.356 [2024-05-13 18:36:21.942189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.356 [2024-05-13 18:36:21.942211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:06.356 [2024-05-13 18:36:21.942231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:06.356 [2024-05-13 18:36:21.942252] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
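[editor's note] The repeating get_bdev_list / sleep 1 blocks in this stretch are the removal wait: once nvmf_tgt_if is deleted, the host keeps listing its bdevs until bdev_nvme gives up reconnecting (--ctrlr-loss-timeout-sec 2 from the start_discovery call above) and nvme0n1 disappears. A rough sketch of that polling loop, assuming rpc_cmd resolves to scripts/rpc.py as it does in the SPDK test helpers:

    # poll the host app until no bdev is left (the '' case of wait_for_bdev)
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock"   # assumed rpc_cmd target
    while [ -n "$($rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs)" ]; do
        sleep 1    # same 1 s cadence as the sleep 1 lines traced above
    done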
00:23:06.356 [2024-05-13 18:36:21.942316] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187b5a0 (9): Bad file descriptor 00:23:06.356 [2024-05-13 18:36:21.943312] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:06.356 [2024-05-13 18:36:21.943358] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:06.356 18:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.356 18:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:06.356 18:36:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:07.291 18:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:07.291 18:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.291 18:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.291 18:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:07.291 18:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:07.291 18:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:07.291 18:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:07.291 18:36:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.291 18:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:07.291 18:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:07.291 18:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:07.291 18:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:07.291 18:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:07.291 18:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.291 18:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.291 18:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:07.291 18:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:07.291 18:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:07.291 18:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:07.291 18:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.291 18:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:07.291 18:36:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:08.225 [2024-05-13 18:36:23.949208] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:08.225 [2024-05-13 18:36:23.949266] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:08.225 [2024-05-13 18:36:23.949285] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:08.225 [2024-05-13 18:36:24.035344] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:08.225 [2024-05-13 18:36:24.090616] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:08.225 [2024-05-13 18:36:24.090677] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:08.225 [2024-05-13 18:36:24.090720] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:08.225 [2024-05-13 18:36:24.090739] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:08.225 [2024-05-13 18:36:24.090749] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:08.225 [2024-05-13 18:36:24.097753] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x18cca50 was disconnected and freed. delete nvme_qpair. 00:23:08.225 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:08.225 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.225 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:08.225 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.225 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:08.225 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:08.225 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:08.225 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.225 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:08.225 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:08.225 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 92683 00:23:08.225 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 92683 ']' 00:23:08.225 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 92683 00:23:08.225 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:23:08.483 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:08.483 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92683 00:23:08.483 killing process with pid 92683 00:23:08.483 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:08.483 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:08.483 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92683' 00:23:08.483 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@965 -- # kill 92683 00:23:08.483 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 92683 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:08.742 rmmod nvme_tcp 00:23:08.742 rmmod nvme_fabrics 00:23:08.742 rmmod nvme_keyring 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 92633 ']' 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 92633 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 92633 ']' 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 92633 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 92633 00:23:08.742 killing process with pid 92633 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 92633' 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 92633 00:23:08.742 [2024-05-13 18:36:24.558423] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:08.742 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 92633 00:23:09.001 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:09.001 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:09.001 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:09.001 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:09.001 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:09.001 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.001 18:36:24 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.001 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.001 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:09.001 00:23:09.001 real 0m14.437s 00:23:09.001 user 0m24.713s 00:23:09.001 sys 0m1.719s 00:23:09.001 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:09.001 18:36:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:09.001 ************************************ 00:23:09.001 END TEST nvmf_discovery_remove_ifc 00:23:09.001 ************************************ 00:23:09.001 18:36:24 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:09.001 18:36:24 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:09.001 18:36:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:09.001 18:36:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:09.001 ************************************ 00:23:09.001 START TEST nvmf_identify_kernel_target 00:23:09.001 ************************************ 00:23:09.001 18:36:24 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:09.261 * Looking for test storage... 00:23:09.261 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:09.261 18:36:24 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:09.261 18:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:09.261 18:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.261 18:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.261 18:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.261 18:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.261 18:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.261 18:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.261 18:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.261 18:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.261 18:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.261 18:36:24 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
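[editor's note] The NVME_HOSTNQN/NVME_HOSTID pair generated here via nvme gen-hostnqn is what this test later hands to nvme-cli when it probes the kernel target; NVME_HOST simply bundles the two flags. Roughly, the discovery call further down in this log expands as follows (the concrete values are the ones visible in the traced command):

    # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID"), so
    nvme discover "${NVME_HOST[@]}" -a 10.0.0.1 -t tcp -s 4420
    # i.e. the 'nvme discover --hostnqn=... --hostid=... -a 10.0.0.1 -t tcp -s 4420'
    # line seen later, aimed at the kernel nvmet port on the initiator-side address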
00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:09.261 
18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:09.261 Cannot find device "nvmf_tgt_br" 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # true 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:09.261 Cannot find device "nvmf_tgt_br2" 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # true 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:09.261 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:09.262 Cannot find device "nvmf_tgt_br" 00:23:09.262 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # true 00:23:09.262 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:09.262 Cannot find device "nvmf_tgt_br2" 00:23:09.262 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # true 00:23:09.262 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:09.262 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:09.262 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:09.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:09.262 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:23:09.262 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:09.262 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:09.262 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:23:09.262 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:09.262 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:09.262 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:09.262 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:09.262 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip netns 
exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:09.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:23:09.521 00:23:09.521 --- 10.0.0.2 ping statistics --- 00:23:09.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.521 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:09.521 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:09.521 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:23:09.521 00:23:09.521 --- 10.0.0.3 ping statistics --- 00:23:09.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.521 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:09.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:09.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:23:09.521 00:23:09.521 --- 10.0.0.1 ping statistics --- 00:23:09.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.521 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@433 -- # return 0 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@728 -- # local ip 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:09.521 18:36:25 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:09.521 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:09.780 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:10.039 Waiting for block devices as requested 00:23:10.039 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:10.039 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:10.039 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:10.039 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:10.039 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:10.039 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:23:10.039 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:10.039 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:10.039 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:10.039 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:10.039 18:36:25 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:10.298 No valid GPT data, bailing 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:10.298 No valid GPT data, bailing 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:10.298 No valid GPT data, bailing 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:10.298 No valid GPT data, bailing 00:23:10.298 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:10.557 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:10.557 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:10.557 18:36:26 nvmf_tcp.nvmf_identify_kernel_target 
-- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:10.557 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:23:10.558 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:10.558 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:10.558 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:10.558 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:10.558 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:10.558 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:10.558 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:10.558 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:10.558 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:10.558 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:10.558 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:10.558 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:10.558 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -a 10.0.0.1 -t tcp -s 4420 00:23:10.558 00:23:10.558 Discovery Log Number of Records 2, Generation counter 2 00:23:10.558 =====Discovery Log Entry 0====== 00:23:10.558 trtype: tcp 00:23:10.558 adrfam: ipv4 00:23:10.558 subtype: current discovery subsystem 00:23:10.558 treq: not specified, sq flow control disable supported 00:23:10.558 portid: 1 00:23:10.558 trsvcid: 4420 00:23:10.558 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:10.558 traddr: 10.0.0.1 00:23:10.558 eflags: none 00:23:10.558 sectype: none 00:23:10.558 =====Discovery Log Entry 1====== 00:23:10.558 trtype: tcp 00:23:10.558 adrfam: ipv4 00:23:10.558 subtype: nvme subsystem 00:23:10.558 treq: not specified, sq flow control disable supported 00:23:10.558 portid: 1 00:23:10.558 trsvcid: 4420 00:23:10.558 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:10.558 traddr: 10.0.0.1 00:23:10.558 eflags: none 00:23:10.558 sectype: none 00:23:10.558 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:10.558 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:10.558 ===================================================== 00:23:10.558 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:10.558 ===================================================== 00:23:10.558 Controller Capabilities/Features 00:23:10.558 ================================ 00:23:10.558 Vendor ID: 0000 00:23:10.558 Subsystem Vendor ID: 0000 00:23:10.558 Serial Number: 952e21ab90ba137b0fde 00:23:10.558 Model Number: Linux 00:23:10.558 Firmware Version: 6.7.0-68 00:23:10.558 Recommended Arb Burst: 0 
00:23:10.558 IEEE OUI Identifier: 00 00 00 00:23:10.558 Multi-path I/O 00:23:10.558 May have multiple subsystem ports: No 00:23:10.558 May have multiple controllers: No 00:23:10.558 Associated with SR-IOV VF: No 00:23:10.558 Max Data Transfer Size: Unlimited 00:23:10.558 Max Number of Namespaces: 0 00:23:10.558 Max Number of I/O Queues: 1024 00:23:10.558 NVMe Specification Version (VS): 1.3 00:23:10.558 NVMe Specification Version (Identify): 1.3 00:23:10.558 Maximum Queue Entries: 1024 00:23:10.558 Contiguous Queues Required: No 00:23:10.558 Arbitration Mechanisms Supported 00:23:10.558 Weighted Round Robin: Not Supported 00:23:10.558 Vendor Specific: Not Supported 00:23:10.558 Reset Timeout: 7500 ms 00:23:10.558 Doorbell Stride: 4 bytes 00:23:10.558 NVM Subsystem Reset: Not Supported 00:23:10.558 Command Sets Supported 00:23:10.558 NVM Command Set: Supported 00:23:10.558 Boot Partition: Not Supported 00:23:10.558 Memory Page Size Minimum: 4096 bytes 00:23:10.558 Memory Page Size Maximum: 4096 bytes 00:23:10.558 Persistent Memory Region: Not Supported 00:23:10.558 Optional Asynchronous Events Supported 00:23:10.558 Namespace Attribute Notices: Not Supported 00:23:10.558 Firmware Activation Notices: Not Supported 00:23:10.558 ANA Change Notices: Not Supported 00:23:10.558 PLE Aggregate Log Change Notices: Not Supported 00:23:10.558 LBA Status Info Alert Notices: Not Supported 00:23:10.558 EGE Aggregate Log Change Notices: Not Supported 00:23:10.558 Normal NVM Subsystem Shutdown event: Not Supported 00:23:10.558 Zone Descriptor Change Notices: Not Supported 00:23:10.558 Discovery Log Change Notices: Supported 00:23:10.558 Controller Attributes 00:23:10.558 128-bit Host Identifier: Not Supported 00:23:10.558 Non-Operational Permissive Mode: Not Supported 00:23:10.558 NVM Sets: Not Supported 00:23:10.558 Read Recovery Levels: Not Supported 00:23:10.558 Endurance Groups: Not Supported 00:23:10.558 Predictable Latency Mode: Not Supported 00:23:10.558 Traffic Based Keep ALive: Not Supported 00:23:10.558 Namespace Granularity: Not Supported 00:23:10.558 SQ Associations: Not Supported 00:23:10.558 UUID List: Not Supported 00:23:10.558 Multi-Domain Subsystem: Not Supported 00:23:10.558 Fixed Capacity Management: Not Supported 00:23:10.558 Variable Capacity Management: Not Supported 00:23:10.558 Delete Endurance Group: Not Supported 00:23:10.558 Delete NVM Set: Not Supported 00:23:10.558 Extended LBA Formats Supported: Not Supported 00:23:10.558 Flexible Data Placement Supported: Not Supported 00:23:10.558 00:23:10.558 Controller Memory Buffer Support 00:23:10.558 ================================ 00:23:10.558 Supported: No 00:23:10.558 00:23:10.558 Persistent Memory Region Support 00:23:10.558 ================================ 00:23:10.558 Supported: No 00:23:10.558 00:23:10.558 Admin Command Set Attributes 00:23:10.558 ============================ 00:23:10.558 Security Send/Receive: Not Supported 00:23:10.558 Format NVM: Not Supported 00:23:10.558 Firmware Activate/Download: Not Supported 00:23:10.558 Namespace Management: Not Supported 00:23:10.558 Device Self-Test: Not Supported 00:23:10.558 Directives: Not Supported 00:23:10.558 NVMe-MI: Not Supported 00:23:10.558 Virtualization Management: Not Supported 00:23:10.558 Doorbell Buffer Config: Not Supported 00:23:10.558 Get LBA Status Capability: Not Supported 00:23:10.558 Command & Feature Lockdown Capability: Not Supported 00:23:10.558 Abort Command Limit: 1 00:23:10.558 Async Event Request Limit: 1 00:23:10.558 Number of Firmware Slots: N/A 
00:23:10.558 Firmware Slot 1 Read-Only: N/A 00:23:10.558 Firmware Activation Without Reset: N/A 00:23:10.558 Multiple Update Detection Support: N/A 00:23:10.558 Firmware Update Granularity: No Information Provided 00:23:10.558 Per-Namespace SMART Log: No 00:23:10.558 Asymmetric Namespace Access Log Page: Not Supported 00:23:10.558 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:10.558 Command Effects Log Page: Not Supported 00:23:10.558 Get Log Page Extended Data: Supported 00:23:10.558 Telemetry Log Pages: Not Supported 00:23:10.558 Persistent Event Log Pages: Not Supported 00:23:10.558 Supported Log Pages Log Page: May Support 00:23:10.558 Commands Supported & Effects Log Page: Not Supported 00:23:10.558 Feature Identifiers & Effects Log Page:May Support 00:23:10.558 NVMe-MI Commands & Effects Log Page: May Support 00:23:10.558 Data Area 4 for Telemetry Log: Not Supported 00:23:10.558 Error Log Page Entries Supported: 1 00:23:10.558 Keep Alive: Not Supported 00:23:10.558 00:23:10.558 NVM Command Set Attributes 00:23:10.558 ========================== 00:23:10.558 Submission Queue Entry Size 00:23:10.558 Max: 1 00:23:10.558 Min: 1 00:23:10.558 Completion Queue Entry Size 00:23:10.558 Max: 1 00:23:10.558 Min: 1 00:23:10.558 Number of Namespaces: 0 00:23:10.558 Compare Command: Not Supported 00:23:10.558 Write Uncorrectable Command: Not Supported 00:23:10.558 Dataset Management Command: Not Supported 00:23:10.558 Write Zeroes Command: Not Supported 00:23:10.558 Set Features Save Field: Not Supported 00:23:10.558 Reservations: Not Supported 00:23:10.558 Timestamp: Not Supported 00:23:10.558 Copy: Not Supported 00:23:10.558 Volatile Write Cache: Not Present 00:23:10.558 Atomic Write Unit (Normal): 1 00:23:10.558 Atomic Write Unit (PFail): 1 00:23:10.558 Atomic Compare & Write Unit: 1 00:23:10.558 Fused Compare & Write: Not Supported 00:23:10.558 Scatter-Gather List 00:23:10.558 SGL Command Set: Supported 00:23:10.558 SGL Keyed: Not Supported 00:23:10.558 SGL Bit Bucket Descriptor: Not Supported 00:23:10.558 SGL Metadata Pointer: Not Supported 00:23:10.558 Oversized SGL: Not Supported 00:23:10.558 SGL Metadata Address: Not Supported 00:23:10.558 SGL Offset: Supported 00:23:10.558 Transport SGL Data Block: Not Supported 00:23:10.558 Replay Protected Memory Block: Not Supported 00:23:10.558 00:23:10.558 Firmware Slot Information 00:23:10.558 ========================= 00:23:10.558 Active slot: 0 00:23:10.558 00:23:10.558 00:23:10.558 Error Log 00:23:10.558 ========= 00:23:10.558 00:23:10.558 Active Namespaces 00:23:10.558 ================= 00:23:10.558 Discovery Log Page 00:23:10.558 ================== 00:23:10.558 Generation Counter: 2 00:23:10.558 Number of Records: 2 00:23:10.558 Record Format: 0 00:23:10.558 00:23:10.558 Discovery Log Entry 0 00:23:10.558 ---------------------- 00:23:10.559 Transport Type: 3 (TCP) 00:23:10.559 Address Family: 1 (IPv4) 00:23:10.559 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:10.559 Entry Flags: 00:23:10.559 Duplicate Returned Information: 0 00:23:10.559 Explicit Persistent Connection Support for Discovery: 0 00:23:10.559 Transport Requirements: 00:23:10.559 Secure Channel: Not Specified 00:23:10.559 Port ID: 1 (0x0001) 00:23:10.559 Controller ID: 65535 (0xffff) 00:23:10.559 Admin Max SQ Size: 32 00:23:10.559 Transport Service Identifier: 4420 00:23:10.559 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:10.559 Transport Address: 10.0.0.1 00:23:10.559 Discovery Log Entry 1 00:23:10.559 ---------------------- 
00:23:10.559 Transport Type: 3 (TCP) 00:23:10.559 Address Family: 1 (IPv4) 00:23:10.559 Subsystem Type: 2 (NVM Subsystem) 00:23:10.559 Entry Flags: 00:23:10.559 Duplicate Returned Information: 0 00:23:10.559 Explicit Persistent Connection Support for Discovery: 0 00:23:10.559 Transport Requirements: 00:23:10.559 Secure Channel: Not Specified 00:23:10.559 Port ID: 1 (0x0001) 00:23:10.559 Controller ID: 65535 (0xffff) 00:23:10.559 Admin Max SQ Size: 32 00:23:10.559 Transport Service Identifier: 4420 00:23:10.559 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:10.559 Transport Address: 10.0.0.1 00:23:10.559 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:10.819 get_feature(0x01) failed 00:23:10.819 get_feature(0x02) failed 00:23:10.819 get_feature(0x04) failed 00:23:10.819 ===================================================== 00:23:10.819 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:10.819 ===================================================== 00:23:10.819 Controller Capabilities/Features 00:23:10.819 ================================ 00:23:10.819 Vendor ID: 0000 00:23:10.819 Subsystem Vendor ID: 0000 00:23:10.819 Serial Number: 435da6d1f697ff83b7fa 00:23:10.819 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:10.819 Firmware Version: 6.7.0-68 00:23:10.819 Recommended Arb Burst: 6 00:23:10.819 IEEE OUI Identifier: 00 00 00 00:23:10.819 Multi-path I/O 00:23:10.819 May have multiple subsystem ports: Yes 00:23:10.819 May have multiple controllers: Yes 00:23:10.819 Associated with SR-IOV VF: No 00:23:10.819 Max Data Transfer Size: Unlimited 00:23:10.819 Max Number of Namespaces: 1024 00:23:10.819 Max Number of I/O Queues: 128 00:23:10.819 NVMe Specification Version (VS): 1.3 00:23:10.819 NVMe Specification Version (Identify): 1.3 00:23:10.819 Maximum Queue Entries: 1024 00:23:10.819 Contiguous Queues Required: No 00:23:10.819 Arbitration Mechanisms Supported 00:23:10.819 Weighted Round Robin: Not Supported 00:23:10.819 Vendor Specific: Not Supported 00:23:10.819 Reset Timeout: 7500 ms 00:23:10.819 Doorbell Stride: 4 bytes 00:23:10.819 NVM Subsystem Reset: Not Supported 00:23:10.819 Command Sets Supported 00:23:10.819 NVM Command Set: Supported 00:23:10.819 Boot Partition: Not Supported 00:23:10.819 Memory Page Size Minimum: 4096 bytes 00:23:10.819 Memory Page Size Maximum: 4096 bytes 00:23:10.819 Persistent Memory Region: Not Supported 00:23:10.819 Optional Asynchronous Events Supported 00:23:10.819 Namespace Attribute Notices: Supported 00:23:10.819 Firmware Activation Notices: Not Supported 00:23:10.819 ANA Change Notices: Supported 00:23:10.819 PLE Aggregate Log Change Notices: Not Supported 00:23:10.819 LBA Status Info Alert Notices: Not Supported 00:23:10.819 EGE Aggregate Log Change Notices: Not Supported 00:23:10.819 Normal NVM Subsystem Shutdown event: Not Supported 00:23:10.819 Zone Descriptor Change Notices: Not Supported 00:23:10.819 Discovery Log Change Notices: Not Supported 00:23:10.819 Controller Attributes 00:23:10.819 128-bit Host Identifier: Supported 00:23:10.819 Non-Operational Permissive Mode: Not Supported 00:23:10.819 NVM Sets: Not Supported 00:23:10.819 Read Recovery Levels: Not Supported 00:23:10.819 Endurance Groups: Not Supported 00:23:10.819 Predictable Latency Mode: Not Supported 00:23:10.819 Traffic Based Keep ALive: 
Supported 00:23:10.819 Namespace Granularity: Not Supported 00:23:10.819 SQ Associations: Not Supported 00:23:10.819 UUID List: Not Supported 00:23:10.819 Multi-Domain Subsystem: Not Supported 00:23:10.819 Fixed Capacity Management: Not Supported 00:23:10.819 Variable Capacity Management: Not Supported 00:23:10.819 Delete Endurance Group: Not Supported 00:23:10.819 Delete NVM Set: Not Supported 00:23:10.819 Extended LBA Formats Supported: Not Supported 00:23:10.819 Flexible Data Placement Supported: Not Supported 00:23:10.819 00:23:10.819 Controller Memory Buffer Support 00:23:10.819 ================================ 00:23:10.819 Supported: No 00:23:10.819 00:23:10.819 Persistent Memory Region Support 00:23:10.819 ================================ 00:23:10.819 Supported: No 00:23:10.819 00:23:10.819 Admin Command Set Attributes 00:23:10.819 ============================ 00:23:10.819 Security Send/Receive: Not Supported 00:23:10.819 Format NVM: Not Supported 00:23:10.819 Firmware Activate/Download: Not Supported 00:23:10.819 Namespace Management: Not Supported 00:23:10.819 Device Self-Test: Not Supported 00:23:10.819 Directives: Not Supported 00:23:10.819 NVMe-MI: Not Supported 00:23:10.819 Virtualization Management: Not Supported 00:23:10.819 Doorbell Buffer Config: Not Supported 00:23:10.819 Get LBA Status Capability: Not Supported 00:23:10.819 Command & Feature Lockdown Capability: Not Supported 00:23:10.819 Abort Command Limit: 4 00:23:10.819 Async Event Request Limit: 4 00:23:10.819 Number of Firmware Slots: N/A 00:23:10.819 Firmware Slot 1 Read-Only: N/A 00:23:10.819 Firmware Activation Without Reset: N/A 00:23:10.819 Multiple Update Detection Support: N/A 00:23:10.819 Firmware Update Granularity: No Information Provided 00:23:10.819 Per-Namespace SMART Log: Yes 00:23:10.819 Asymmetric Namespace Access Log Page: Supported 00:23:10.819 ANA Transition Time : 10 sec 00:23:10.819 00:23:10.819 Asymmetric Namespace Access Capabilities 00:23:10.819 ANA Optimized State : Supported 00:23:10.819 ANA Non-Optimized State : Supported 00:23:10.819 ANA Inaccessible State : Supported 00:23:10.819 ANA Persistent Loss State : Supported 00:23:10.819 ANA Change State : Supported 00:23:10.819 ANAGRPID is not changed : No 00:23:10.819 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:10.819 00:23:10.819 ANA Group Identifier Maximum : 128 00:23:10.819 Number of ANA Group Identifiers : 128 00:23:10.819 Max Number of Allowed Namespaces : 1024 00:23:10.819 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:10.819 Command Effects Log Page: Supported 00:23:10.819 Get Log Page Extended Data: Supported 00:23:10.819 Telemetry Log Pages: Not Supported 00:23:10.819 Persistent Event Log Pages: Not Supported 00:23:10.819 Supported Log Pages Log Page: May Support 00:23:10.819 Commands Supported & Effects Log Page: Not Supported 00:23:10.819 Feature Identifiers & Effects Log Page:May Support 00:23:10.819 NVMe-MI Commands & Effects Log Page: May Support 00:23:10.819 Data Area 4 for Telemetry Log: Not Supported 00:23:10.819 Error Log Page Entries Supported: 128 00:23:10.819 Keep Alive: Supported 00:23:10.819 Keep Alive Granularity: 1000 ms 00:23:10.819 00:23:10.819 NVM Command Set Attributes 00:23:10.819 ========================== 00:23:10.819 Submission Queue Entry Size 00:23:10.819 Max: 64 00:23:10.819 Min: 64 00:23:10.819 Completion Queue Entry Size 00:23:10.819 Max: 16 00:23:10.819 Min: 16 00:23:10.820 Number of Namespaces: 1024 00:23:10.820 Compare Command: Not Supported 00:23:10.820 Write Uncorrectable Command: Not 
Supported 00:23:10.820 Dataset Management Command: Supported 00:23:10.820 Write Zeroes Command: Supported 00:23:10.820 Set Features Save Field: Not Supported 00:23:10.820 Reservations: Not Supported 00:23:10.820 Timestamp: Not Supported 00:23:10.820 Copy: Not Supported 00:23:10.820 Volatile Write Cache: Present 00:23:10.820 Atomic Write Unit (Normal): 1 00:23:10.820 Atomic Write Unit (PFail): 1 00:23:10.820 Atomic Compare & Write Unit: 1 00:23:10.820 Fused Compare & Write: Not Supported 00:23:10.820 Scatter-Gather List 00:23:10.820 SGL Command Set: Supported 00:23:10.820 SGL Keyed: Not Supported 00:23:10.820 SGL Bit Bucket Descriptor: Not Supported 00:23:10.820 SGL Metadata Pointer: Not Supported 00:23:10.820 Oversized SGL: Not Supported 00:23:10.820 SGL Metadata Address: Not Supported 00:23:10.820 SGL Offset: Supported 00:23:10.820 Transport SGL Data Block: Not Supported 00:23:10.820 Replay Protected Memory Block: Not Supported 00:23:10.820 00:23:10.820 Firmware Slot Information 00:23:10.820 ========================= 00:23:10.820 Active slot: 0 00:23:10.820 00:23:10.820 Asymmetric Namespace Access 00:23:10.820 =========================== 00:23:10.820 Change Count : 0 00:23:10.820 Number of ANA Group Descriptors : 1 00:23:10.820 ANA Group Descriptor : 0 00:23:10.820 ANA Group ID : 1 00:23:10.820 Number of NSID Values : 1 00:23:10.820 Change Count : 0 00:23:10.820 ANA State : 1 00:23:10.820 Namespace Identifier : 1 00:23:10.820 00:23:10.820 Commands Supported and Effects 00:23:10.820 ============================== 00:23:10.820 Admin Commands 00:23:10.820 -------------- 00:23:10.820 Get Log Page (02h): Supported 00:23:10.820 Identify (06h): Supported 00:23:10.820 Abort (08h): Supported 00:23:10.820 Set Features (09h): Supported 00:23:10.820 Get Features (0Ah): Supported 00:23:10.820 Asynchronous Event Request (0Ch): Supported 00:23:10.820 Keep Alive (18h): Supported 00:23:10.820 I/O Commands 00:23:10.820 ------------ 00:23:10.820 Flush (00h): Supported 00:23:10.820 Write (01h): Supported LBA-Change 00:23:10.820 Read (02h): Supported 00:23:10.820 Write Zeroes (08h): Supported LBA-Change 00:23:10.820 Dataset Management (09h): Supported 00:23:10.820 00:23:10.820 Error Log 00:23:10.820 ========= 00:23:10.820 Entry: 0 00:23:10.820 Error Count: 0x3 00:23:10.820 Submission Queue Id: 0x0 00:23:10.820 Command Id: 0x5 00:23:10.820 Phase Bit: 0 00:23:10.820 Status Code: 0x2 00:23:10.820 Status Code Type: 0x0 00:23:10.820 Do Not Retry: 1 00:23:10.820 Error Location: 0x28 00:23:10.820 LBA: 0x0 00:23:10.820 Namespace: 0x0 00:23:10.820 Vendor Log Page: 0x0 00:23:10.820 ----------- 00:23:10.820 Entry: 1 00:23:10.820 Error Count: 0x2 00:23:10.820 Submission Queue Id: 0x0 00:23:10.820 Command Id: 0x5 00:23:10.820 Phase Bit: 0 00:23:10.820 Status Code: 0x2 00:23:10.820 Status Code Type: 0x0 00:23:10.820 Do Not Retry: 1 00:23:10.820 Error Location: 0x28 00:23:10.820 LBA: 0x0 00:23:10.820 Namespace: 0x0 00:23:10.820 Vendor Log Page: 0x0 00:23:10.820 ----------- 00:23:10.820 Entry: 2 00:23:10.820 Error Count: 0x1 00:23:10.820 Submission Queue Id: 0x0 00:23:10.820 Command Id: 0x4 00:23:10.820 Phase Bit: 0 00:23:10.820 Status Code: 0x2 00:23:10.820 Status Code Type: 0x0 00:23:10.820 Do Not Retry: 1 00:23:10.820 Error Location: 0x28 00:23:10.820 LBA: 0x0 00:23:10.820 Namespace: 0x0 00:23:10.820 Vendor Log Page: 0x0 00:23:10.820 00:23:10.820 Number of Queues 00:23:10.820 ================ 00:23:10.820 Number of I/O Submission Queues: 128 00:23:10.820 Number of I/O Completion Queues: 128 00:23:10.820 00:23:10.820 ZNS 
Specific Controller Data 00:23:10.820 ============================ 00:23:10.820 Zone Append Size Limit: 0 00:23:10.820 00:23:10.820 00:23:10.820 Active Namespaces 00:23:10.820 ================= 00:23:10.820 get_feature(0x05) failed 00:23:10.820 Namespace ID:1 00:23:10.820 Command Set Identifier: NVM (00h) 00:23:10.820 Deallocate: Supported 00:23:10.820 Deallocated/Unwritten Error: Not Supported 00:23:10.820 Deallocated Read Value: Unknown 00:23:10.820 Deallocate in Write Zeroes: Not Supported 00:23:10.820 Deallocated Guard Field: 0xFFFF 00:23:10.820 Flush: Supported 00:23:10.820 Reservation: Not Supported 00:23:10.820 Namespace Sharing Capabilities: Multiple Controllers 00:23:10.820 Size (in LBAs): 1310720 (5GiB) 00:23:10.820 Capacity (in LBAs): 1310720 (5GiB) 00:23:10.820 Utilization (in LBAs): 1310720 (5GiB) 00:23:10.820 UUID: 0281d734-8636-4a4f-9e2e-072b61043d49 00:23:10.820 Thin Provisioning: Not Supported 00:23:10.820 Per-NS Atomic Units: Yes 00:23:10.820 Atomic Boundary Size (Normal): 0 00:23:10.820 Atomic Boundary Size (PFail): 0 00:23:10.820 Atomic Boundary Offset: 0 00:23:10.820 NGUID/EUI64 Never Reused: No 00:23:10.820 ANA group ID: 1 00:23:10.820 Namespace Write Protected: No 00:23:10.820 Number of LBA Formats: 1 00:23:10.820 Current LBA Format: LBA Format #00 00:23:10.820 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:23:10.820 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:10.820 rmmod nvme_tcp 00:23:10.820 rmmod nvme_fabrics 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.820 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr 
flush nvmf_init_if 00:23:11.080 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:11.080 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:11.080 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:11.080 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:11.080 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:11.080 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:11.080 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:11.080 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:11.080 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:11.080 18:36:26 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:11.646 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:11.646 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:11.905 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:11.905 00:23:11.905 real 0m2.743s 00:23:11.905 user 0m0.958s 00:23:11.905 sys 0m1.317s 00:23:11.905 18:36:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:11.905 18:36:27 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.905 ************************************ 00:23:11.905 END TEST nvmf_identify_kernel_target 00:23:11.905 ************************************ 00:23:11.905 18:36:27 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:11.905 18:36:27 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:11.905 18:36:27 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:11.905 18:36:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:11.905 ************************************ 00:23:11.905 START TEST nvmf_auth 00:23:11.905 ************************************ 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:11.905 * Looking for test storage... 
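For reference, the identify_kernel_target run above drives the Linux kernel nvmet target entirely through configfs. The xtrace lines show the mkdir/echo/ln -s calls (nvmf/common.sh steps 650-698) but not where each echo is redirected, so the sketch below fills in the standard nvmet configfs attribute names as an assumption; the NQN, device, address, port and teardown order are taken from the trace, and the Model Number "SPDK-nqn.2016-06.io.spdk:testnqn" reported in the identify output above is what suggests attr_model as the target of the first echo.

# Minimal sketch, assuming the standard nvmet configfs attribute names
subnqn=nqn.2016-06.io.spdk:testnqn
subsys=/sys/kernel/config/nvmet/subsystems/$subnqn
port=/sys/kernel/config/nvmet/ports/1
nvme=/dev/nvme1n1                          # the idle, non-zoned, non-partitioned namespace the scan settled on

modprobe nvmet nvmet_tcp                   # loaded before this excerpt in the real run (assumption)

mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo "SPDK-$subnqn" > "$subsys/attr_model"         # becomes the Model Number seen in the identify output
echo 1 > "$subsys/attr_allow_any_host"             # assumed target of the first 'echo 1'
echo "$nvme" > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"             # assumed target of the second 'echo 1'
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                # target now answers on 10.0.0.1:4420

# The test then queries it the way the log shows, roughly:
#   nvme discover -t tcp -a 10.0.0.1 -s 4420
#   spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

# Teardown, mirroring clean_kernel_target above
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/$subnqn"
rmdir "$subsys/namespaces/1"
rmdir "$port"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet

Because the whole target lives in configfs, teardown is just the same tree removed in reverse, plus modprobe -r once /sys/module/nvmet/holders is empty, which is exactly what clean_kernel_target checks before unloading.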
00:23:11.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # uname -s 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- paths/export.sh@5 -- # export PATH 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@47 -- # : 0 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.905 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # keys=() 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # ckeys=() 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # nvmftestinit 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:11.906 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:12.164 Cannot find device "nvmf_tgt_br" 00:23:12.164 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@155 -- # true 00:23:12.164 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:12.164 Cannot find device "nvmf_tgt_br2" 00:23:12.164 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@156 -- # true 00:23:12.164 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:12.164 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:12.164 Cannot find device "nvmf_tgt_br" 00:23:12.164 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@158 -- # true 00:23:12.164 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:12.164 Cannot find device "nvmf_tgt_br2" 00:23:12.164 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@159 -- # true 00:23:12.164 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:12.164 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:12.164 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:12.164 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:12.164 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@162 -- # true 00:23:12.165 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@163 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:12.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:12.165 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@163 -- # true 00:23:12.165 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:12.165 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:12.165 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:12.165 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:12.165 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:12.165 18:36:27 nvmf_tcp.nvmf_auth -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:12.165 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:12.165 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:12.165 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:12.165 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:12.165 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:12.165 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:12.165 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:12.165 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:12.165 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:12.165 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:12.165 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:12.165 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:12.165 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:12.165 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:12.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:23:12.423 00:23:12.423 --- 10.0.0.2 ping statistics --- 00:23:12.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.423 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:12.423 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:23:12.423 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:23:12.423 00:23:12.423 --- 10.0.0.3 ping statistics --- 00:23:12.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.423 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:12.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:12.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:23:12.423 00:23:12.423 --- 10.0.0.1 ping statistics --- 00:23:12.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.423 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@433 -- # return 0 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # nvmfappstart -L nvme_auth 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:12.423 18:36:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:12.424 18:36:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:12.424 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@481 -- # nvmfpid=93567 00:23:12.424 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:12.424 18:36:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@482 -- # waitforlisten 93567 00:23:12.424 18:36:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 93567 ']' 00:23:12.424 18:36:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.424 18:36:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.424 18:36:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
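Before nvmf_tgt is started, nvmftestinit builds the veth topology traced above so the kernel-side initiator (10.0.0.1 in the default netns) can reach the SPDK target interfaces (10.0.0.2 and 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace. Condensed out of the trace into a standalone sketch (run as root), the bring-up is roughly:

# Names and addresses are the ones used by nvmf_veth_init above
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The iptables rules open TCP/4420 on the initiator interface and allow traffic to be forwarded across the bridge; the single-packet pings at the end are the same sanity checks whose statistics appear in the log above.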
00:23:12.424 18:36:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.424 18:36:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@83 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key null 32 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=1997ac73209094bb2351d42351640c59 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.AAA 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 1997ac73209094bb2351d42351640c59 0 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 1997ac73209094bb2351d42351640c59 0 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=1997ac73209094bb2351d42351640c59 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:23:13.357 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.AAA 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.AAA 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # keys[0]=/tmp/spdk.key-null.AAA 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key sha512 64 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # 
key=369795d4566fa1f364c0e8f31193470c2781ad744174dbd58b888509dd446557 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.yvA 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 369795d4566fa1f364c0e8f31193470c2781ad744174dbd58b888509dd446557 3 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 369795d4566fa1f364c0e8f31193470c2781ad744174dbd58b888509dd446557 3 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=369795d4566fa1f364c0e8f31193470c2781ad744174dbd58b888509dd446557 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.yvA 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.yvA 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # ckeys[0]=/tmp/spdk.key-sha512.yvA 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key null 48 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=14f8a89fd24898da65a8db8f1ddccb1aea21e64b3f3f290c 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.nft 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 14f8a89fd24898da65a8db8f1ddccb1aea21e64b3f3f290c 0 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 14f8a89fd24898da65a8db8f1ddccb1aea21e64b3f3f290c 0 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=14f8a89fd24898da65a8db8f1ddccb1aea21e64b3f3f290c 00:23:13.616 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.nft 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.nft 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # keys[1]=/tmp/spdk.key-null.nft 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key sha384 48 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' 
['sha384']='2' ['sha512']='3') 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=1f11e67dc73670ff4cecc84750ba4b3fffb0f1f8b38bc4f0 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.4e0 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 1f11e67dc73670ff4cecc84750ba4b3fffb0f1f8b38bc4f0 2 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 1f11e67dc73670ff4cecc84750ba4b3fffb0f1f8b38bc4f0 2 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=1f11e67dc73670ff4cecc84750ba4b3fffb0f1f8b38bc4f0 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.4e0 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.4e0 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # ckeys[1]=/tmp/spdk.key-sha384.4e0 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=ec455ae6822cefdb3f17c400de1658a6 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.PDe 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key ec455ae6822cefdb3f17c400de1658a6 1 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 ec455ae6822cefdb3f17c400de1658a6 1 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=ec455ae6822cefdb3f17c400de1658a6 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:23:13.617 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.PDe 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.PDe 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # 
keys[2]=/tmp/spdk.key-sha256.PDe 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=2735240dd5a2a2a764befb4a4441d9f7 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.F1X 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 2735240dd5a2a2a764befb4a4441d9f7 1 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 2735240dd5a2a2a764befb4a4441d9f7 1 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=2735240dd5a2a2a764befb4a4441d9f7 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.F1X 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.F1X 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # ckeys[2]=/tmp/spdk.key-sha256.F1X 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key sha384 48 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=882bd7c750c122b7322e29cb05c7e6bdc24a79eb676b3d54 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.VXx 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 882bd7c750c122b7322e29cb05c7e6bdc24a79eb676b3d54 2 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 882bd7c750c122b7322e29cb05c7e6bdc24a79eb676b3d54 2 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=882bd7c750c122b7322e29cb05c7e6bdc24a79eb676b3d54 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@705 -- # python - 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.VXx 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.VXx 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # keys[3]=/tmp/spdk.key-sha384.VXx 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key null 32 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=274c3b39e59ce73d84d918a7bd8da0ce 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.78B 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 274c3b39e59ce73d84d918a7bd8da0ce 0 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 274c3b39e59ce73d84d918a7bd8da0ce 0 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=274c3b39e59ce73d84d918a7bd8da0ce 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.78B 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.78B 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # ckeys[3]=/tmp/spdk.key-null.78B 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # gen_key sha512 64 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=9cdb9014576967ed95a6f290375e72a0db356d5570de6f2908b33bd25fc29244 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.kY7 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 9cdb9014576967ed95a6f290375e72a0db356d5570de6f2908b33bd25fc29244 3 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 9cdb9014576967ed95a6f290375e72a0db356d5570de6f2908b33bd25fc29244 3 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix 
key digest 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=9cdb9014576967ed95a6f290375e72a0db356d5570de6f2908b33bd25fc29244 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:23:13.876 18:36:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:23:14.135 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.kY7 00:23:14.135 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.kY7 00:23:14.135 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # keys[4]=/tmp/spdk.key-sha512.kY7 00:23:14.135 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # ckeys[4]= 00:23:14.135 18:36:29 nvmf_tcp.nvmf_auth -- host/auth.sh@92 -- # waitforlisten 93567 00:23:14.135 18:36:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 93567 ']' 00:23:14.135 18:36:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.135 18:36:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:14.135 18:36:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.135 18:36:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:14.135 18:36:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.AAA 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha512.yvA ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yvA 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.nft 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha384.4e0 ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.4e0 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 
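The gen_key trace above draws len/2 bytes from /dev/urandom with `xxd -p -c0`, drops the hex string into a `mktemp`'d file, and wraps it into a DHHC-1 secret via `format_dhchap_key`, which shells out to a small `python -` snippet that is not itself visible in the log. A minimal stand-alone sketch of that flow, assuming the usual NVMe DH-HMAC-CHAP secret representation (base64 of the secret bytes followed by their CRC32, with the ASCII hex string treated as the secret, consistent with the keys printed later); the encoder below is written from scratch as an approximation, not copied from SPDK:

```bash
#!/usr/bin/env bash
# Sketch only: emulate "gen_key <digest> <len>" as traced above.
# Digest index used in the DHHC-1 prefix: 0=none, 1=sha256, 2=sha384, 3=sha512.
gen_dhchap_key() {
    local digest_idx=$1 hexlen=$2
    local hexkey file
    hexkey=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)   # e.g. 16 bytes -> 32 hex chars
    file=$(mktemp -t spdk.key-demo.XXX)
    # Assumed encoding: base64( secret || CRC32(secret) little-endian ), where the
    # secret is the ASCII hex string itself.
    python3 - "$hexkey" "$digest_idx" > "$file" <<'PY'
import base64, binascii, struct, sys
secret = sys.argv[1].encode()
crc = struct.pack("<I", binascii.crc32(secret) & 0xffffffff)
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(secret + crc).decode()}:")
PY
    chmod 0600 "$file"
    echo "$file"
}

gen_dhchap_key 1 32   # roughly what "gen_key sha256 32" does above
```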
00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.PDe 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha256.F1X ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.F1X 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.VXx 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-null.78B ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.78B 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.kY7 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n '' ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@98 -- # nvmet_auth_init 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # get_main_ns_ip 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- 
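Once generated, each secret file is handed to the running SPDK application by name through the keyring_file_add_key RPCs seen above (key0..key4 for the host secrets, ckey0..ckey3 for the controller secrets). Outside the harness the same registration is normally done with scripts/rpc.py; a short illustrative example, where the socket path and file names are placeholders rather than the tempfile names from this run:

```bash
# Illustrative registration of DH-HMAC-CHAP secrets with a running SPDK target.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC keyring_file_add_key key0  /tmp/spdk.key-null.XXX     # host secret for keyid 0
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.XXX   # matching controller secret

# bdev_nvme_attach_controller can later reference the names directly:
#   --dhchap-key key0 --dhchap-ctrlr-key ckey0
```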
nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@639 -- # local block nvme 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:14.394 18:36:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:14.653 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:14.912 Waiting for block devices as requested 00:23:14.912 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:14.912 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:15.477 No valid GPT data, bailing 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 
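The `setup.sh reset` output above ("uio_pci_generic -> nvme") is where the test hands the NVMe PCI functions back to the kernel driver so their namespaces reappear as /dev/nvme* block devices for the kernel target to export. The generic sysfs mechanism behind that kind of rebind looks roughly like the sketch below; the BDF is taken from the log, but the actual script handles many more cases:

```bash
# Hedged sketch: rebind one PCI function from a userspace driver back to nvme.
bdf=0000:00:10.0                                             # example address from the log
echo "$bdf" > /sys/bus/pci/drivers/uio_pci_generic/unbind    # release the device
echo nvme   > "/sys/bus/pci/devices/$bdf/driver_override"    # pin the next driver
echo "$bdf" > /sys/bus/pci/drivers_probe                     # let the kernel bind it
echo        > "/sys/bus/pci/devices/$bdf/driver_override"    # clear the override again
```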
-- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:23:15.477 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:15.736 No valid GPT data, bailing 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:15.736 No valid GPT data, bailing 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:15.736 No valid GPT data, bailing 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 
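The for-loop over /sys/block/nvme* above is simply picking a namespace that is safe to export: it skips zoned devices and uses blkid and spdk-gpt.py to confirm there is no partition table ("No valid GPT data, bailing" is the desired outcome), ending up with /dev/nvme1n1. A condensed version of that selection, reproducing only the zoned and blkid checks (the harness keeps the last matching device; this sketch returns the first):

```bash
# Pick the first non-zoned NVMe namespace with no partition table (sketch).
pick_free_nvme() {
    local sysdev dev
    for sysdev in /sys/block/nvme*; do
        [[ -e $sysdev ]] || continue
        dev=/dev/${sysdev##*/}
        # Skip zoned namespaces, mirroring is_block_zoned in the trace.
        [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]] && continue
        # Empty PTTYPE from blkid means no partition table on the namespace.
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            echo "$dev"
            return 0
        fi
    done
    return 1
}
pick_free_nvme
```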
00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@667 -- # echo 1 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@669 -- # echo 1 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@672 -- # echo tcp 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@673 -- # echo 4420 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@674 -- # echo ipv4 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:15.736 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -a 10.0.0.1 -t tcp -s 4420 00:23:15.995 00:23:15.995 Discovery Log Number of Records 2, Generation counter 2 00:23:15.995 =====Discovery Log Entry 0====== 00:23:15.995 trtype: tcp 00:23:15.995 adrfam: ipv4 00:23:15.995 subtype: current discovery subsystem 00:23:15.995 treq: not specified, sq flow control disable supported 00:23:15.995 portid: 1 00:23:15.995 trsvcid: 4420 00:23:15.995 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:15.995 traddr: 10.0.0.1 00:23:15.995 eflags: none 00:23:15.995 sectype: none 00:23:15.995 =====Discovery Log Entry 1====== 00:23:15.995 trtype: tcp 00:23:15.995 adrfam: ipv4 00:23:15.995 subtype: nvme subsystem 00:23:15.995 treq: not specified, sq flow control disable supported 00:23:15.995 portid: 1 00:23:15.995 trsvcid: 4420 00:23:15.995 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:15.995 traddr: 10.0.0.1 00:23:15.995 eflags: none 00:23:15.995 sectype: none 00:23:15.995 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:15.995 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@37 -- # echo 0 00:23:15.995 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:15.995 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:15.995 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.995 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:15.996 18:36:31 
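Those mkdir/echo/ln -s steps are the whole kernel soft-target setup: create the subsystem and namespace under the nvmet configfs tree, point the namespace at the chosen block device, open a TCP port on 10.0.0.1:4420, and link the subsystem into the port, after which `nvme discover` reports the two log entries printed above. A hedged reconstruction follows; the trace shows only the values being echoed, so the attribute file names are assumed from the standard nvmet configfs layout rather than taken from the script itself:

```bash
# Reconstruction of the configure_kernel_target steps (attribute names assumed).
NQN=nqn.2024-02.io.spdk:cnode0
SUB=/sys/kernel/config/nvmet/subsystems/$NQN
PORT=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
modprobe nvmet-tcp
mkdir -p "$SUB/namespaces/1" "$PORT"

echo "SPDK-$NQN"  > "$SUB/attr_model"              # assumed target of the SPDK-nqn... echo
echo 1            > "$SUB/attr_allow_any_host"     # assumed; auth.sh later adds allowed_hosts
echo /dev/nvme1n1 > "$SUB/namespaces/1/device_path"
echo 1            > "$SUB/namespaces/1/enable"

echo 10.0.0.1 > "$PORT/addr_traddr"
echo tcp      > "$PORT/addr_trtype"
echo 4420     > "$PORT/addr_trsvcid"
echo ipv4     > "$PORT/addr_adrfam"

ln -s "$SUB" "$PORT/subsystems/"           # expose the subsystem on the TCP port

nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list discovery + cnode0, as above
```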
nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s sha256,sha384,sha512 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256,sha384,sha512 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:15.996 nvme0n1 
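Each connect_authenticate iteration that follows is driven by two RPCs, both visible verbatim in the trace: bdev_nvme_set_options narrows the initiator to the digest/DH-group pair under test, and bdev_nvme_attach_controller performs the authenticated connect using the key names registered earlier; bdev_nvme_get_controllers and bdev_nvme_detach_controller then verify and tear down before the next combination. Collected into one snippet (the socket path is illustrative):

```bash
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

# Restrict the initiator to the combination under test (here sha256 + ffdhe2048).
$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Authenticated connect to the kernel target; ckey1 enables bidirectional auth.
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Confirm the controller exists, then detach before the next digest/dhgroup pair.
$RPC bdev_nvme_get_controllers
$RPC bdev_nvme_detach_controller nvme0
```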
00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.996 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.255 18:36:31 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 0 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.255 nvme0n1 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe2048 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 1 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.255 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.564 nvme0n1 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:16.564 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 2 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:16.565 18:36:32 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.565 nvme0n1 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:16.565 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.823 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.823 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.823 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.823 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.823 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.823 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:16.823 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:16.823 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.823 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:16.823 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.823 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:23:16.823 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:16.823 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 3 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.824 nvme0n1 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 4 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.824 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.083 nvme0n1 00:23:17.083 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.083 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:17.083 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.083 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.083 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.083 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.083 18:36:32 
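On the target side, each nvmet_auth_set_key call above echoes the hash name ('hmac(sha256)'), the DH group, and the DHHC-1 secrets for the host entry created earlier. The trace hides the destination paths; on a kernel with nvmet authentication support these would land in the host's dhchap_* configfs attributes, so the sketch below labels them as assumptions (keys shortened here only for readability):

```bash
# Assumed destinations for the echoes traced above (standard nvmet host attributes).
HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha256)'           > "$HOST/dhchap_hash"      # digest under test
echo ffdhe2048                > "$HOST/dhchap_dhgroup"   # DH group under test
echo "DHHC-1:00:MTRmOGE4...:" > "$HOST/dhchap_key"       # host secret (shortened)
echo "DHHC-1:02:MWYxMWU2...:" > "$HOST/dhchap_ctrl_key"  # controller secret for bidirectional auth
```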
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.083 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.083 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.083 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.083 18:36:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.084 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:17.084 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:17.084 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:17.084 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.084 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:17.084 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.084 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:17.084 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:17.084 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:17.084 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.084 18:36:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 0 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.343 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.602 nvme0n1 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 1 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:17.602 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:17.603 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.603 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.603 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.603 nvme0n1 00:23:17.603 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.603 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.603 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.603 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.603 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:17.603 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 2 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.862 nvme0n1 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 3 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@729 -- # ip_candidates=() 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.862 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:18.122 nvme0n1 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 4 00:23:18.122 
18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.122 18:36:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:18.381 nvme0n1 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key 
sha256 ffdhe4096 0 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.381 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 0 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.949 18:36:34 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.949 18:36:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:19.208 nvme0n1 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 1 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 
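For readability, here is a minimal sketch of the loop this part of the trace is exercising, reconstructed only from the rpc_cmd calls visible above. rpc_cmd is the test suite's JSON-RPC wrapper as it appears in the log; the keys/ckeys arrays are assumed to hold the DHHC-1 secrets echoed above, and the digest is fixed to sha256 as in this portion of the run. Treat it as an illustration of the traced flow, not the exact host/auth.sh source:

# Sketch: one pass over the DH groups and key ids, as traced above (sha256 only).
# keys[]/ckeys[] are assumed to hold the DHHC-1 host and controller secrets from the log.
for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
  for keyid in "${!keys[@]}"; do
    nvmet_auth_set_key sha256 "$dhgroup" "$keyid"        # program the target-side key (host/auth.sh@116)
    # connect_authenticate: configure the initiator, attach with DH-HMAC-CHAP, verify, detach
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})   # empty when no controller key is set (key 4)
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" "${ckey[@]}"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # attach must yield controller nvme0
    rpc_cmd bdev_nvme_detach_controller nvme0            # clean up before the next key id
  done
done

Each iteration detaches the controller again so the next key id starts from a clean state, which is why the get_controllers/detach pair repeats after every attach in the trace above and below.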
00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.208 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:19.467 nvme0n1 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 
-- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 2 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:19.467 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.468 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:19.727 nvme0n1 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 3 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.727 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:19.985 nvme0n1 00:23:19.985 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.985 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.985 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 4 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.986 18:36:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:20.244 nvme0n1 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:23:20.244 18:36:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.141 18:36:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:22.141 18:36:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 0 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.142 18:36:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:22.400 nvme0n1 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.400 
18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:22.400 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:22.401 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:22.401 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 1 00:23:22.401 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:22.401 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:22.401 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:22.401 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:22.401 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.401 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:22.401 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.401 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:22.658 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.658 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:22.658 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:22.658 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:22.658 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:22.658 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.658 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.658 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:22.658 
18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.658 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:22.658 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:22.658 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:22.658 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.658 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.658 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:22.916 nvme0n1 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:22.916 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 2 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:22.917 18:36:38 
nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.917 18:36:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:23.175 nvme0n1 00:23:23.175 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.175 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.175 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:23.175 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.175 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=3 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 3 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.434 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:23.693 nvme0n1 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 4 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:23.693 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.694 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:24.278 nvme0n1 00:23:24.278 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.278 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.278 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.278 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:24.278 18:36:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:24.278 18:36:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:24.279 18:36:40 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 0 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:28.485 18:36:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.486 18:36:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:28.486 18:36:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:28.486 18:36:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:28.486 18:36:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:28.486 18:36:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.486 18:36:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:28.486 nvme0n1 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 1 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.486 18:36:44 nvmf_tcp.nvmf_auth -- 
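Each pass begins with an nvmet_auth_set_key call (here sha256 / ffdhe8192 / key index 1) that installs the matching secret on the target side before the host tries to connect. Its shape can be read off the host/auth.sh@42 to @51 trace lines; the sketch below is reconstructed from those lines only, so the redirections that actually consume the echoed values (the target's auth configuration, which xtrace does not show) are deliberately left out, and the keys/ckeys arrays are assumed to be defined earlier in the script:

    # Reconstruction from the @42-@51 trace lines; the destinations of the echoes are not visible in the log.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"       # @44
        key="${keys[keyid]}"                      # @45: host secret in DHHC-1:xx:...: form
        ckey="${ckeys[keyid]}"                    # @46: controller secret, empty for some indices

        echo "hmac(${digest})"                    # @48: e.g. 'hmac(sha256)'
        echo "$dhgroup"                           # @49: e.g. ffdhe8192
        echo "$key"                               # @50
        [[ -z $ckey ]] || echo "$ckey"            # @51: only when a controller secret exists
    }
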
common/autotest_common.sh@10 -- # set +x 00:23:29.052 nvme0n1 00:23:29.052 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.052 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.052 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.052 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:29.052 18:36:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:29.052 18:36:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 2 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # 
local ip 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.311 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:29.879 nvme0n1 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 3 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.879 18:36:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:30.447 nvme0n1 00:23:30.447 18:36:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.447 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.447 18:36:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.447 18:36:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:30.447 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:30.447 18:36:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.447 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.447 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.447 18:36:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.447 18:36:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:30.705 18:36:46 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 4 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.705 18:36:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:30.706 18:36:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.706 18:36:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:30.706 18:36:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:30.706 18:36:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:30.706 18:36:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:30.706 18:36:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.706 
18:36:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.272 nvme0n1 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 0 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- 
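At this point the outer loop advances from sha256 to sha384 and the whole DH-group/key sweep restarts, which is exactly what the host/auth.sh@113 to @115 markers say: three nested loops over digests, DH groups, and key indices, installing the target key and attempting a connect for every combination. A skeleton of that iteration, reconstructed from the loop headers in the trace (the array contents listed here are only the values that actually occur in this part of the log; the full script may cover more):

    # Loop skeleton inferred from the @113-@115 trace lines.
    digests=(sha256 sha384)                              # digests seen in this stretch of the log
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)   # likewise
    # keys/ckeys: arrays indexed 0..4, defined earlier in the script (not shown here)

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # target side (sketched above)
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # host side: set_options + attach + detach
            done
        done
    done
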
common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.272 nvme0n1 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.272 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- 
host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 1 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.531 nvme0n1 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.531 
18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 2 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:31.531 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.532 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.791 nvme0n1 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 3 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth 
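The nvmf/common.sh@728 to @742 lines that repeat before every attach are the get_main_ns_ip helper picking which address to dial: for a tcp transport it resolves NVMF_INITIATOR_IP, which in this run is 10.0.0.1. A reconstruction from those trace lines follows; the name of the variable holding the transport (the literal tcp in the [[ -z tcp ]] checks) and whatever error handling sits between @737 and @742 are not visible in the log, so both are assumptions here:

    # Reconstructed from the @728-@742 trace lines; TEST_TRANSPORT and the failure paths are assumptions.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @735: the *name* of the variable to read
        [[ -z ${!ip} ]] && return 1            # @737: indirect check; expands to 10.0.0.1 here
        echo "${!ip}"                          # @742
    }
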
-- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:31.791 nvme0n1 00:23:31.791 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 4 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.050 nvme0n1 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.050 18:36:47 
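The key-index-4 pass just above is the one asymmetric case in each sweep: its ckey is empty, the [[ -z '' ]] guard skips the controller-secret echo on the target side, and the resulting attach carries only --dhchap-key key4, so the target authenticates the host but the host does not challenge the controller. The host/auth.sh@71 expansion is what makes the controller-key arguments conditional; a minimal, self-contained illustration of that bash idiom with hypothetical values (not the script's own data):

    # ${arr[i]:+word} expands to word only when arr[i] is set and non-empty,
    # so the extra RPC arguments appear only for key indices that have a controller secret.
    ckeys=("ckey-secret" "")       # hypothetical: index 0 has a controller key, index 1 does not
    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> extra attach args: ${ckey[*]:-<none>}"
    done
    # keyid=0 -> extra attach args: --dhchap-ctrlr-key ckey0
    # keyid=1 -> extra attach args: <none>
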
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 0 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:32.050 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:32.309 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.309 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.309 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:32.309 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.309 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:32.309 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:32.309 18:36:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:32.309 18:36:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.309 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.309 18:36:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.309 nvme0n1 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:32.309 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 1 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.310 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.569 nvme0n1 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 2 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.569 nvme0n1 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:32.569 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.827 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.827 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.827 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 3 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@729 -- # ip_candidates=() 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.828 nvme0n1 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 4 00:23:32.828 
18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.828 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.156 nvme0n1 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key 
sha384 ffdhe4096 0 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 0 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.156 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:33.157 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:33.157 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:33.157 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:33.157 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.157 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.157 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:33.157 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.157 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:33.157 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:33.157 18:36:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:33.157 18:36:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.157 18:36:48 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.157 18:36:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.416 nvme0n1 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 1 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 
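For readability, the cycle that this xtrace output keeps repeating (one pass per DH group and key index) can be summarized as the sketch below. It is reconstructed only from the trace and is not the verbatim host/auth.sh: rpc_cmd, the keys[]/ckeys[] arrays, nvmet_auth_set_key, and the nqn.2024-02.io.spdk:host0 / nqn.2024-02.io.spdk:cnode0 names are the test environment's own helpers and identifiers, and anything the trace does not show (for example where nvmet_auth_set_key redirects the echoed 'hmac(sha384)', DH group, and DHHC-1 secrets on the target side) is assumed rather than known.

    # Condensed sketch of the per-iteration flow visible in the trace above/below.
    digest=sha384
    subnqn=nqn.2024-02.io.spdk:cnode0
    hostnqn=nqn.2024-02.io.spdk:host0
    target_ip=10.0.0.1   # what get_main_ns_ip resolves to for the tcp transport

    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
        for keyid in "${!keys[@]}"; do
            # Target side: install the key (and controller key, if any) for this host.
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

            # Host side: restrict the initiator to the digest/DH group under test.
            rpc_cmd bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

            # Bidirectional auth only when a controller key exists for this index
            # (this mirrors the ckey=(${ckeys[keyid]:+...}) expansion in the trace).
            ckey_arg=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a "$target_ip" -s 4420 -q "$hostnqn" -n "$subnqn" \
                --dhchap-key "key${keyid}" "${ckey_arg[@]}"

            # The attach only yields a controller if DH-HMAC-CHAP succeeded;
            # confirm the name, then clean up before the next combination.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done

Each attach in the trace returns a controller only after DH-HMAC-CHAP negotiation with the configured digest and DH group completes, which is why every iteration ends with the bdev_nvme_get_controllers name check against nvme0 followed by bdev_nvme_detach_controller.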
00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.416 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.675 nvme0n1 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 
-- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 2 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.675 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.934 nvme0n1 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 3 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:33.934 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.935 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:34.194 nvme0n1 00:23:34.194 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.194 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.194 18:36:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:34.194 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.194 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:34.194 18:36:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 4 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.194 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:34.454 nvme0n1 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # 
echo 'hmac(sha384)' 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 0 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.454 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:35.022 nvme0n1 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.022 
18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 1 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:35.022 
18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.022 18:36:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:35.281 nvme0n1 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 2 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:35.281 18:36:51 
nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.281 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:35.848 nvme0n1 00:23:35.848 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.848 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.848 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.848 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:35.848 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:35.848 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.848 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.848 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=3 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 3 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.849 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:36.108 nvme0n1 00:23:36.108 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.108 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:36.108 18:36:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:36.108 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.108 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:36.108 18:36:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 4 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:36.108 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:36.109 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:36.109 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:36.109 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.109 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.109 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- 
# [[ -z tcp ]] 00:23:36.109 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.109 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:36.109 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:36.109 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:36.109 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:36.109 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.109 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:36.676 nvme0n1 00:23:36.676 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.676 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:36.676 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.676 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.676 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 0 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # 
local digest dhgroup keyid ckey 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.677 18:36:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:37.243 nvme0n1 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
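[Editor's note] The trace above is one pass of connect_authenticate for sha384/ffdhe8192 with key index 0: restrict the host's allowed digest and DH group, then attach with DH-HMAC-CHAP keys. A minimal sketch of that host-side sequence issued directly through SPDK's scripts/rpc.py is shown below; invoking rpc.py this way and the pre-registered key names key0/ckey0 are assumptions, while the RPC names and flags are the ones visible in the trace.

    # Assumption: keys named key0/ckey0 are already registered in the keyring; names are illustrative.
    # Restrict the initiator to the digest/dhgroup pair under test.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # Attach to the target at 10.0.0.1:4420 and authenticate with DH-HMAC-CHAP,
    # supplying both the host key and the controller (bidirectional) key.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0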
00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 1 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.243 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # 
set +x 00:23:38.179 nvme0n1 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 2 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:38.179 18:36:53 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.179 18:36:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:38.746 nvme0n1 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:38.746 
18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 3 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.746 18:36:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:39.313 nvme0n1 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.313 
18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:39.313 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 4 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.314 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 
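[Editor's note] At this point the trace moves from the sha384 block to sha512 with ffdhe2048, which shows the overall shape of host/auth.sh lines 113-117: every digest is exercised against every DH group and every key index, programming the target first and then authenticating from the host. The loop below is a reconstruction from the trace, not a verbatim copy of the script; the exact contents of the digests and dhgroups arrays are assumed from the values that appear in the log.

    # Reconstruction of the test matrix driven by host/auth.sh (per the xtrace above).
    digests=(sha256 sha384 sha512)                                      # assumed; sha384/sha512 visible in the trace
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # assumed; ffdhe2048/6144/8192 visible in the trace
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # program the target's expected key/hash/dhgroup
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # set host options, attach, verify, detach
            done
        done
    done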
00:23:40.251 nvme0n1 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 0 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth 
-- common/autotest_common.sh@10 -- # set +x 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.251 18:36:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.251 nvme0n1 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe2048 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 1 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.251 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.510 nvme0n1 00:23:40.510 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.510 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.510 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.510 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.510 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:40.510 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.510 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 
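[Editor's note] The checks just above ([[ nvme0 == \n\v\m\e\0 ]] followed by bdev_nvme_detach_controller) are how each iteration is verified and torn down before the next digest/dhgroup/key combination. A standalone sketch of that step is below; the scripts/rpc.py invocation is an assumption standing in for the test's rpc_cmd wrapper, while the RPCs and the jq filter are taken from the trace.

    # Verify the authenticated attach produced a controller named nvme0,
    # then detach so the next combination starts from a clean state.
    name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ "$name" == "nvme0" ]] || exit 1        # an authentication mismatch would fail here
    scripts/rpc.py bdev_nvme_detach_controller nvme0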
00:23:40.510 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 2 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:40.511 18:36:56 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.511 nvme0n1 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.511 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 3 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.771 nvme0n1 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 4 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.771 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.031 nvme0n1 00:23:41.031 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.031 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.031 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.031 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.031 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:41.031 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.031 18:36:56 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.031 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.031 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.031 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 0 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.032 nvme0n1 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:41.032 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.290 18:36:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.290 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.290 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.290 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.290 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.290 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 1 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.291 nvme0n1 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.291 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 2 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.550 nvme0n1 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.550 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 3 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@729 -- # ip_candidates=() 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.551 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.810 nvme0n1 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 4 00:23:41.810 
18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.810 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.069 nvme0n1 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 0 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.069 18:36:57 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.069 18:36:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.329 nvme0n1 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 1 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 
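
The transcript repeats the same attach/verify/detach cycle for every DH group and key index. The shell sketch below condenses that cycle into a standalone loop for readers following along; it is a minimal illustration, not the autotest script itself. Assumptions beyond what the log shows: rpc.py is SPDK's stock RPC client and is on PATH, the kernel nvmet target has already been programmed with matching DH-HMAC-CHAP keys (the nvmet_auth_set_key step recorded in the log), and initiator keys named key0..key4 plus controller keys ckey0..ckey3 are already registered with the initiator's keyring under those names, as the log implies.

#!/usr/bin/env bash
# Hedged sketch of the per-dhgroup / per-keyid DH-HMAC-CHAP sweep seen in the log.
# rpc.py location, the digest/dhgroup lists, and the pre-registered key names
# are assumptions; the RPC names and flags below appear verbatim in the log.
set -e

rpc=rpc.py                                # assumed path to SPDK's RPC client
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0
addr=10.0.0.1
port=4420
digest=sha512

for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
  for keyid in 0 1 2 3 4; do
    # Limit the initiator to a single digest/DH-group pair, as in the log.
    "$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # keyid 4 is unidirectional in the log (no ckey4), so only request a
    # controller (bidirectional) key when one exists.
    ctrlr_key=()
    if [ "$keyid" -lt 4 ]; then
      ctrlr_key=(--dhchap-ctrlr-key "ckey${keyid}")
    fi

    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a "$addr" -s "$port" -q "$hostnqn" -n "$subnqn" \
      --dhchap-key "key${keyid}" "${ctrlr_key[@]}"

    # Authentication succeeded if the controller is visible by name.
    [ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" = nvme0 ]

    "$rpc" bdev_nvme_detach_controller nvme0
  done
done

In the actual run, host/auth.sh drives this loop through its rpc_cmd and connect_authenticate helpers and re-keys the nvmet target before every attach; the sketch only exercises the initiator-side RPCs that appear in the log.
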
00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.329 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.588 nvme0n1 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 
-- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 2 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.588 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.847 nvme0n1 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 3 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:42.847 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:42.848 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:42.848 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:42.848 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.848 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.848 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:42.848 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.848 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:42.848 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:42.848 18:36:58 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:42.848 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:42.848 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.848 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:43.107 nvme0n1 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 4 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.107 18:36:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:43.366 nvme0n1 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 0 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:43.366 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:43.367 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.367 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.367 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:43.367 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.367 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:43.367 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:43.367 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:43.367 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.367 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.367 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:43.626 nvme0n1 00:23:43.626 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.626 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.626 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.626 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:43.626 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:43.626 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.885 
18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 1 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:43.885 
18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.885 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:44.144 nvme0n1 00:23:44.144 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.144 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.144 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.144 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:44.144 18:36:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:44.144 18:36:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 2 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:44.144 18:37:00 
nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:44.144 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.145 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:44.712 nvme0n1 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=3 00:23:44.712 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 3 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.713 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:44.971 nvme0n1 00:23:44.971 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.971 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.971 18:37:00 nvmf_tcp.nvmf_auth -- 
host/auth.sh@77 -- # jq -r '.[].name' 00:23:44.971 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.971 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:44.971 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.971 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.971 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 4 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.972 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.230 18:37:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:45.489 nvme0n1 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTk5N2FjNzMyMDkwOTRiYjIzNTFkNDIzNTE2NDBjNTl+mQa4: 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: ]] 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:MzY5Nzk1ZDQ1NjZmYTFmMzY0YzBlOGYzMTE5MzQ3MGMyNzgxYWQ3NDQxNzRkYmQ1OGI4ODg1MDlkZDQ0NjU1NyYPUxo=: 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 0 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- 
host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:45.489 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.490 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:46.056 nvme0n1 00:23:46.056 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.056 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.056 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:46.056 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.056 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:46.056 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.056 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.056 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.056 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest 
dhgroup keyid key ckey 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:46.057 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:46.315 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 1 00:23:46.315 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:46.315 18:37:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.315 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:46.316 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:46.316 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:46.316 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.316 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.316 18:37:02 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:23:46.883 nvme0n1 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:ZWM0NTVhZTY4MjJjZWZkYjNmMTdjNDAwZGUxNjU4YTZQF/7C: 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: ]] 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:MjczNTI0MGRkNWEyYTJhNzY0YmVmYjRhNDQ0MWQ5ZjcmFJll: 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 2 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.883 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # 
local ip 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.884 18:37:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:47.512 nvme0n1 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:23:47.512 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:ODgyYmQ3Yzc1MGMxMjJiNzMyMmUyOWNiMDVjN2U2YmRjMjRhNzllYjY3NmIzZDU0c0/zLA==: 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: ]] 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:Mjc0YzNiMzllNTljZTczZDg0ZDkxOGE3YmQ4ZGEwY2WH+p8g: 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 3 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.513 18:37:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:48.448 nvme0n1 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:48.448 18:37:04 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:OWNkYjkwMTQ1NzY5NjdlZDk1YTZmMjkwMzc1ZTcyYTBkYjM1NmQ1NTcwZGU2ZjI5MDhiMzNiZDI1ZmMyOTI0NDvIGbs=: 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 4 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:48.448 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.448 
18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:49.016 nvme0n1 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@123 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:MTRmOGE4OWZkMjQ4OThkYTY1YThkYjhmMWRkY2NiMWFlYTIxZTY0YjNmM2YyOTBjsdTHMg==: 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: ]] 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:MWYxMWU2N2RjNzM2NzBmZjRjZWNjODQ3NTBiYTRiM2ZmZmIwZjFmOGIzOGJjNGYwune7Lw==: 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@124 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # get_main_ns_ip 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:49.016 
18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.016 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:49.016 2024/05/13 18:37:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:23:49.016 request: 00:23:49.016 { 00:23:49.016 "method": "bdev_nvme_attach_controller", 00:23:49.016 "params": { 00:23:49.016 "name": "nvme0", 00:23:49.016 "trtype": "tcp", 00:23:49.016 "traddr": "10.0.0.1", 00:23:49.016 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:49.017 "adrfam": "ipv4", 00:23:49.017 "trsvcid": "4420", 00:23:49.017 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:23:49.017 } 00:23:49.017 } 00:23:49.017 Got JSON-RPC error response 00:23:49.017 GoRPCClient: error on JSON-RPC call 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # jq length 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # (( 0 == 0 )) 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@130 
-- # get_main_ns_ip 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:49.017 2024/05/13 18:37:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:23:49.017 request: 00:23:49.017 { 00:23:49.017 "method": "bdev_nvme_attach_controller", 00:23:49.017 "params": { 00:23:49.017 "name": "nvme0", 00:23:49.017 "trtype": "tcp", 00:23:49.017 "traddr": "10.0.0.1", 00:23:49.017 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:49.017 "adrfam": "ipv4", 00:23:49.017 "trsvcid": "4420", 00:23:49.017 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:49.017 "dhchap_key": "key2" 00:23:49.017 } 00:23:49.017 } 00:23:49.017 Got JSON-RPC error response 00:23:49.017 GoRPCClient: error on JSON-RPC call 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ 
-n '' ]] 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # jq length 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:49.017 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # (( 0 == 0 )) 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # get_main_ns_ip 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.276 18:37:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:49.276 2024/05/13 18:37:05 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_ctrlr_key:ckey2 dhchap_key:key1 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:23:49.276 request: 00:23:49.276 { 00:23:49.276 "method": "bdev_nvme_attach_controller", 
00:23:49.276 "params": { 00:23:49.276 "name": "nvme0", 00:23:49.276 "trtype": "tcp", 00:23:49.276 "traddr": "10.0.0.1", 00:23:49.276 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:49.276 "adrfam": "ipv4", 00:23:49.276 "trsvcid": "4420", 00:23:49.276 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:49.276 "dhchap_key": "key1", 00:23:49.276 "dhchap_ctrlr_key": "ckey2" 00:23:49.276 } 00:23:49.276 } 00:23:49.276 Got JSON-RPC error response 00:23:49.276 GoRPCClient: error on JSON-RPC call 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- host/auth.sh@140 -- # trap - SIGINT SIGTERM EXIT 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- host/auth.sh@141 -- # cleanup 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- host/auth.sh@24 -- # nvmftestfini 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@117 -- # sync 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@120 -- # set +e 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:49.276 rmmod nvme_tcp 00:23:49.276 rmmod nvme_fabrics 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@124 -- # set -e 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@125 -- # return 0 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@489 -- # '[' -n 93567 ']' 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@490 -- # killprocess 93567 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@946 -- # '[' -z 93567 ']' 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@950 -- # kill -0 93567 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # uname 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 93567 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:49.276 killing process with pid 93567 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 93567' 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@965 -- # kill 93567 00:23:49.276 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@970 -- # wait 93567 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- host/auth.sh@27 -- # clean_kernel_target 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@686 -- # echo 0 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:49.535 18:37:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:50.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:50.467 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:50.467 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:50.467 18:37:06 nvmf_tcp.nvmf_auth -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.AAA /tmp/spdk.key-null.nft /tmp/spdk.key-sha256.PDe /tmp/spdk.key-sha384.VXx /tmp/spdk.key-sha512.kY7 /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:23:50.467 18:37:06 nvmf_tcp.nvmf_auth -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:51.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:51.035 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:51.035 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:51.035 00:23:51.035 real 0m39.036s 00:23:51.035 user 0m35.440s 00:23:51.035 sys 0m3.744s 00:23:51.035 18:37:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:51.035 ************************************ 00:23:51.035 END TEST nvmf_auth 00:23:51.035 ************************************ 00:23:51.035 18:37:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:23:51.035 18:37:06 nvmf_tcp -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:23:51.035 18:37:06 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:51.035 18:37:06 nvmf_tcp -- 
common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:51.035 18:37:06 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:51.035 18:37:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:51.035 ************************************ 00:23:51.035 START TEST nvmf_digest 00:23:51.035 ************************************ 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:51.035 * Looking for test storage... 00:23:51.035 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:51.035 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:51.036 18:37:06 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@432 -- # nvmf_veth_init 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:51.036 Cannot find device "nvmf_tgt_br" 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@155 -- # true 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:51.036 Cannot find device "nvmf_tgt_br2" 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@156 -- # true 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:51.036 Cannot find device "nvmf_tgt_br" 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@158 -- # true 00:23:51.036 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:51.294 Cannot find device "nvmf_tgt_br2" 00:23:51.294 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@159 -- # true 00:23:51.294 18:37:06 nvmf_tcp.nvmf_digest -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:51.294 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@162 -- # true 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:51.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@163 -- # true 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:51.294 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:51.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:51.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:23:51.553 00:23:51.553 --- 10.0.0.2 ping statistics --- 00:23:51.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.553 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:51.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:51.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:23:51.553 00:23:51.553 --- 10.0.0.3 ping statistics --- 00:23:51.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.553 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:51.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:23:51.553 00:23:51.553 --- 10.0.0.1 ping statistics --- 00:23:51.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.553 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@433 -- # return 0 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:51.553 ************************************ 00:23:51.553 START TEST nvmf_digest_clean 00:23:51.553 ************************************ 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.553 18:37:07 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=95203 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 95203 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 95203 ']' 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:51.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:51.553 18:37:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:51.553 [2024-05-13 18:37:07.356884] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:23:51.553 [2024-05-13 18:37:07.356994] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.553 [2024-05-13 18:37:07.495678] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.812 [2024-05-13 18:37:07.616789] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:51.812 [2024-05-13 18:37:07.616847] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:51.812 [2024-05-13 18:37:07.616860] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:51.812 [2024-05-13 18:37:07.616868] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:51.812 [2024-05-13 18:37:07.616876] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
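The nvmf_veth_init trace above (nvmf/common.sh@141-209) is easier to follow when collapsed into the handful of iproute2 commands it actually runs. The sketch below is only a condensed recap of that setup, using the same namespace and interface names that appear in the trace; it assumes root privileges on the test VM and is not a substitute for the helper itself.

# One namespace, three veth pairs, everything bridged, TCP/4420 allowed in from the initiator side.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2  # second target IP
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                 # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                        # namespace -> host

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: the teardown half of the helper (common.sh@154-163) runs before the setup half and simply finds nothing to delete on a fresh node, which is why each of those commands is followed by "true".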
00:23:51.812 [2024-05-13 18:37:07.616902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:52.748 null0 00:23:52.748 [2024-05-13 18:37:08.524408] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.748 [2024-05-13 18:37:08.548364] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:52.748 [2024-05-13 18:37:08.548636] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95259 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95259 /var/tmp/bperf.sock 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 95259 ']' 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:52.748 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:52.748 18:37:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:52.748 [2024-05-13 18:37:08.615304] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:23:52.748 [2024-05-13 18:37:08.615415] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95259 ] 00:23:53.007 [2024-05-13 18:37:08.756115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.007 [2024-05-13 18:37:08.890369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.943 18:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:53.943 18:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:23:53.943 18:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:53.943 18:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:53.943 18:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:54.201 18:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:54.201 18:37:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:54.460 nvme0n1 00:23:54.460 18:37:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:54.460 18:37:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:54.460 Running I/O for 2 seconds... 
00:23:56.989 00:23:56.989 Latency(us) 00:23:56.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.989 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:23:56.989 nvme0n1 : 2.01 18591.59 72.62 0.00 0.00 6875.87 3649.16 14120.03 00:23:56.989 =================================================================================================================== 00:23:56.989 Total : 18591.59 72.62 0.00 0.00 6875.87 3649.16 14120.03 00:23:56.989 0 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:23:56.989 | select(.opcode=="crc32c") 00:23:56.989 | "\(.module_name) \(.executed)"' 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95259 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 95259 ']' 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 95259 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95259 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:23:56.989 killing process with pid 95259 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95259' 00:23:56.989 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 95259 00:23:56.989 Received shutdown signal, test time was about 2.000000 seconds 00:23:56.989 00:23:56.989 Latency(us) 00:23:56.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.990 =================================================================================================================== 00:23:56.990 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 95259 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95345 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95345 /var/tmp/bperf.sock 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 95345 ']' 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:56.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:56.990 18:37:12 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:57.247 [2024-05-13 18:37:12.969397] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:23:57.247 [2024-05-13 18:37:12.969519] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95345 ] 00:23:57.247 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:57.247 Zero copy mechanism will not be used. 
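Each run_bperf pass in this test follows the same four-step flow that the xtrace above shows for the first (randread, 4 KiB, qd=128) run. Condensed, with the paths and arguments taken verbatim from the trace, it looks roughly like this; the backgrounding and pid bookkeeping are paraphrased from the bperfpid/waitforlisten lines.

# bdevperf starts paused (--wait-for-rpc) so digest options can be injected over its RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
bperfpid=$!

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bperf.sock framework_start_init
# --ddgst enables the NVMe/TCP data digest on this controller.
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Drive I/O through the attached nvme0n1 bdev for the configured 2 seconds.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests

The later runs only vary the workload arguments (-w randread/randwrite, -o 4096/131072, -q 128/16); the "zero copy threshold" notices for the 128 KiB runs are informational and do not affect the digest check.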
00:23:57.247 [2024-05-13 18:37:13.106873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.505 [2024-05-13 18:37:13.223035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.070 18:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:58.070 18:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:23:58.070 18:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:58.070 18:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:58.070 18:37:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:58.636 18:37:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:58.636 18:37:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:58.895 nvme0n1 00:23:58.895 18:37:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:58.895 18:37:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:58.895 I/O size of 131072 is greater than zero copy threshold (65536). 00:23:58.895 Zero copy mechanism will not be used. 00:23:58.895 Running I/O for 2 seconds... 00:24:01.429 00:24:01.429 Latency(us) 00:24:01.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.429 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:01.429 nvme0n1 : 2.00 8107.42 1013.43 0.00 0.00 1969.51 618.12 4230.05 00:24:01.429 =================================================================================================================== 00:24:01.429 Total : 8107.42 1013.43 0.00 0.00 1969.51 618.12 4230.05 00:24:01.429 0 00:24:01.429 18:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:01.429 18:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:01.429 18:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:01.429 18:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:01.429 18:37:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:01.429 | select(.opcode=="crc32c") 00:24:01.429 | "\(.module_name) \(.executed)"' 00:24:01.429 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:01.429 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:01.429 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:01.429 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:01.429 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95345 00:24:01.429 18:37:17 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 95345 ']' 00:24:01.429 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 95345 00:24:01.429 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:24:01.429 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:01.429 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95345 00:24:01.429 killing process with pid 95345 00:24:01.429 Received shutdown signal, test time was about 2.000000 seconds 00:24:01.429 00:24:01.429 Latency(us) 00:24:01.429 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.429 =================================================================================================================== 00:24:01.429 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:01.429 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:01.429 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:01.429 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95345' 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 95345 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 95345 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95431 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95431 /var/tmp/bperf.sock 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 95431 ']' 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:01.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
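After each 2-second run the script verifies that CRC32C work was really executed, and by the expected accel module (plain software here; the dsa_initiator/dsa_target variants expect DSA instead). The check is just an accel_get_stats RPC filtered through jq, as in the trace; the process-substitution form of the read is a paraphrase of the pipeline shown above.

# Pull crc32c statistics from bdevperf's accel layer and confirm digests were computed in software.
read -r acc_module acc_executed < <(
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 ))           # some digests were actually executed
[[ $acc_module == software ]]    # and by the expected module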
00:24:01.430 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:01.688 18:37:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:01.688 [2024-05-13 18:37:17.415299] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:24:01.688 [2024-05-13 18:37:17.415702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95431 ] 00:24:01.688 [2024-05-13 18:37:17.551379] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.946 [2024-05-13 18:37:17.677502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.513 18:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:02.513 18:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:24:02.513 18:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:02.513 18:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:02.513 18:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:03.081 18:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:03.081 18:37:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:03.081 nvme0n1 00:24:03.339 18:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:03.339 18:37:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:03.339 Running I/O for 2 seconds... 
00:24:05.279 00:24:05.279 Latency(us) 00:24:05.279 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.279 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:05.279 nvme0n1 : 2.01 22229.42 86.83 0.00 0.00 5751.50 2398.02 9055.88 00:24:05.279 =================================================================================================================== 00:24:05.279 Total : 22229.42 86.83 0.00 0.00 5751.50 2398.02 9055.88 00:24:05.279 0 00:24:05.279 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:05.279 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:05.279 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:05.279 | select(.opcode=="crc32c") 00:24:05.279 | "\(.module_name) \(.executed)"' 00:24:05.279 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:05.279 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95431 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 95431 ']' 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 95431 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95431 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:05.538 killing process with pid 95431 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95431' 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 95431 00:24:05.538 Received shutdown signal, test time was about 2.000000 seconds 00:24:05.538 00:24:05.538 Latency(us) 00:24:05.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:05.538 =================================================================================================================== 00:24:05.538 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:05.538 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 95431 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=95526 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 95526 /var/tmp/bperf.sock 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 95526 ']' 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:05.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:05.797 18:37:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:06.055 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:06.055 Zero copy mechanism will not be used. 00:24:06.055 [2024-05-13 18:37:21.782643] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:24:06.055 [2024-05-13 18:37:21.782751] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95526 ] 00:24:06.055 [2024-05-13 18:37:21.922449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.314 [2024-05-13 18:37:22.044891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.880 18:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:06.880 18:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:24:06.880 18:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:06.880 18:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:06.880 18:37:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:07.446 18:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:07.446 18:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:07.704 nvme0n1 00:24:07.704 18:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:07.704 18:37:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:07.704 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:07.704 Zero copy mechanism will not be used. 00:24:07.704 Running I/O for 2 seconds... 
00:24:10.236 00:24:10.236 Latency(us) 00:24:10.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.236 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:10.236 nvme0n1 : 2.00 6487.31 810.91 0.00 0.00 2460.71 1660.74 4051.32 00:24:10.236 =================================================================================================================== 00:24:10.236 Total : 6487.31 810.91 0.00 0.00 2460.71 1660.74 4051.32 00:24:10.236 0 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:10.236 | select(.opcode=="crc32c") 00:24:10.236 | "\(.module_name) \(.executed)"' 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 95526 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 95526 ']' 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 95526 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95526 00:24:10.236 killing process with pid 95526 00:24:10.236 Received shutdown signal, test time was about 2.000000 seconds 00:24:10.236 00:24:10.236 Latency(us) 00:24:10.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:10.236 =================================================================================================================== 00:24:10.236 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95526' 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 95526 00:24:10.236 18:37:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 95526 00:24:10.236 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 95203 00:24:10.236 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@946 -- # '[' -z 95203 ']' 00:24:10.236 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 95203 00:24:10.236 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:24:10.236 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:10.236 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95203 00:24:10.495 killing process with pid 95203 00:24:10.495 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:10.495 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:10.495 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95203' 00:24:10.495 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 95203 00:24:10.495 [2024-05-13 18:37:26.183628] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:10.495 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 95203 00:24:10.754 00:24:10.754 real 0m19.167s 00:24:10.754 user 0m36.731s 00:24:10.754 sys 0m4.609s 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:10.754 ************************************ 00:24:10.754 END TEST nvmf_digest_clean 00:24:10.754 ************************************ 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:10.754 ************************************ 00:24:10.754 START TEST nvmf_digest_error 00:24:10.754 ************************************ 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:10.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=95635 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 95635 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 95635 ']' 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:10.754 18:37:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:10.754 [2024-05-13 18:37:26.570400] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:24:10.754 [2024-05-13 18:37:26.570518] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.013 [2024-05-13 18:37:26.707253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.013 [2024-05-13 18:37:26.823312] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.013 [2024-05-13 18:37:26.823356] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.013 [2024-05-13 18:37:26.823367] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:11.013 [2024-05-13 18:37:26.823376] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:11.013 [2024-05-13 18:37:26.823383] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
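For nvmf_digest_error the target is started the same way as in the previous test: nvmf_tgt runs inside the nvmf_tgt_ns_spdk namespace (NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD array set at common.sh@209) and is held at --wait-for-rpc until configured. The sketch below paraphrases the nvmfpid=95635 / waitforlisten 95635 lines above; the rpc_cmd batch that follows in the trace then creates the null0 bdev and the NVMe/TCP listener on 10.0.0.2:4420.

# Launch the target inside the test namespace, paused until RPC configuration arrives.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
waitforlisten "$nvmfpid"   # autotest helper: waits until /var/tmp/spdk.sock answers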
00:24:11.013 [2024-05-13 18:37:26.823407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:11.950 [2024-05-13 18:37:27.656009] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:11.950 null0 00:24:11.950 [2024-05-13 18:37:27.775731] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.950 [2024-05-13 18:37:27.799685] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:11.950 [2024-05-13 18:37:27.800045] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95679 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95679 /var/tmp/bperf.sock 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 95679 ']' 00:24:11.950 18:37:27 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:11.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:11.950 18:37:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:11.950 [2024-05-13 18:37:27.887716] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:24:11.950 [2024-05-13 18:37:27.887855] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95679 ] 00:24:12.208 [2024-05-13 18:37:28.034481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.505 [2024-05-13 18:37:28.154405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.095 18:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:13.095 18:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:24:13.095 18:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:13.095 18:37:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:13.354 18:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:13.354 18:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.354 18:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.354 18:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.354 18:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.354 18:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.612 nvme0n1 00:24:13.871 18:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:13.871 18:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.871 18:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.871 18:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.871 18:37:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:13.871 18:37:29 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:13.871 Running I/O for 2 seconds... 00:24:13.871 [2024-05-13 18:37:29.694756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:13.871 [2024-05-13 18:37:29.694838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.871 [2024-05-13 18:37:29.694855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.871 [2024-05-13 18:37:29.718741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:13.871 [2024-05-13 18:37:29.718874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.871 [2024-05-13 18:37:29.718900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.871 [2024-05-13 18:37:29.747000] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:13.871 [2024-05-13 18:37:29.747154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.871 [2024-05-13 18:37:29.747178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.871 [2024-05-13 18:37:29.772128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:13.871 [2024-05-13 18:37:29.772264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.871 [2024-05-13 18:37:29.772290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:13.871 [2024-05-13 18:37:29.796801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:13.871 [2024-05-13 18:37:29.796906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:13.871 [2024-05-13 18:37:29.796932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.130 [2024-05-13 18:37:29.824128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:29.824275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:29.824325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:29.838842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:29.838941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:57 nsid:1 lba:17597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:29.838970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:29.854754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:29.854876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:29.854918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:29.870697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:29.870813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:29.870844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:29.884085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:29.884144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:29.884172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:29.900704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:29.900768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:29.900798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:29.914448] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:29.914511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:29.914539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:29.929905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:29.929976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:29.930004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:29.946418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:29.946505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:29.946532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:29.960287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:29.960349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:29.960377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:29.974011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:29.974073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:29.974100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:29.990484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:29.990549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:29.990594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:30.005716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:30.005782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:30.005811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:30.019728] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:30.019782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:15434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:30.019805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:30.033766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:30.033827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:30.033851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:30.045871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 
[2024-05-13 18:37:30.045924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:30.045947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:30.059793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:30.059852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:30.059876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.131 [2024-05-13 18:37:30.073139] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.131 [2024-05-13 18:37:30.073195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.131 [2024-05-13 18:37:30.073217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.088440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.088495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.088519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.102818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.102870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.102893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.115262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.115314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.115336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.128191] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.128243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.128265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.143794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.143853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.143875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.155641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.155691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.155714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.168305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.168358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.168381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.182597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.182647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.182670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.196995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.197047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.197071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.211136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.211189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.211212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.226100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.226153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.226177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.238826] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.238879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.238902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.253269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.253323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.253345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.265796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.265849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.265871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.278009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.278060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.278083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.290260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.290313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.290335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.305278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.305331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.305354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.390 [2024-05-13 18:37:30.320188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.320239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:4193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.320261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:14.390 [2024-05-13 18:37:30.331351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.390 [2024-05-13 18:37:30.331402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.390 [2024-05-13 18:37:30.331425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.346179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.346233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.346255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.361428] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.361481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.361504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.374210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.374267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.374290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.388551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.388625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.388648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.401591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.401643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.401666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.415299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.415352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13844 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.415375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.429271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.429325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.429347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.441691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.441743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.441765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.453674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.453724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.453747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.469036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.469090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.469113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.483002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.483055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.483080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.497284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.497338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:7049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.497361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.508465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.508520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:4532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.508544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.523102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.523157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.523180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.536904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.536958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.536981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.550173] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.550227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.550251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.563086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.563139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.563162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.574028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.574081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.574104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.650 [2024-05-13 18:37:30.588902] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.650 [2024-05-13 18:37:30.588957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.650 [2024-05-13 18:37:30.588980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.600391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.600445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.600469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.615138] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.615192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.615215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.627316] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.627369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.627392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.640948] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.640999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.641021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.656681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.656745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.656768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.669156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.669208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.669231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.683710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.683762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.683785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.696119] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.696174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 
[2024-05-13 18:37:30.696198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.709659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.709718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:20780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.709742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.724825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.724880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.724903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.739366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.739421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.739445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.751842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.751895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.751919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.765794] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.765848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.765872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.778256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.778310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.778334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.791990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.792043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5287 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.792067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.806096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.806150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.806173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.817549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.817619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.817642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.832303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.832361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.832385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.909 [2024-05-13 18:37:30.845731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:14.909 [2024-05-13 18:37:30.845787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:8886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.909 [2024-05-13 18:37:30.845810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:30.859972] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:30.860034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:30.860058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:30.873192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:30.873246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:30.873269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:30.887369] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:30.887421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:93 nsid:1 lba:16469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:30.887444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:30.899919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:30.899971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:30.899995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:30.915031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:30.915087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:30.915110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:30.928706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:30.928757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:30.928782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:30.943492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:30.943549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:30.943592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:30.954760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:30.954813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:30.954836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:30.968257] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:30.968315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:6408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:30.968338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:30.982561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:30.982626] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:30.982650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:30.996305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:30.996360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:30.996383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:31.010193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:31.010246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:15586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:31.010268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:31.023284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:31.023337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:5715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:31.023360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:31.036024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:31.036078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:31.036100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:31.049388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:31.049470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:31.049493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:31.062843] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:31.062930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:31.062956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:31.077200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 
[2024-05-13 18:37:31.077290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:31.077315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:31.092247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:31.092303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:31.092326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.169 [2024-05-13 18:37:31.105470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.169 [2024-05-13 18:37:31.105523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.169 [2024-05-13 18:37:31.105546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.429 [2024-05-13 18:37:31.118451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.429 [2024-05-13 18:37:31.118542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.429 [2024-05-13 18:37:31.118565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.429 [2024-05-13 18:37:31.132159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.429 [2024-05-13 18:37:31.132217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.429 [2024-05-13 18:37:31.132240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.429 [2024-05-13 18:37:31.144102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.429 [2024-05-13 18:37:31.144168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:20022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.429 [2024-05-13 18:37:31.144191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.429 [2024-05-13 18:37:31.158343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.429 [2024-05-13 18:37:31.158400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.429 [2024-05-13 18:37:31.158454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.429 [2024-05-13 18:37:31.173150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x9a4560) 00:24:15.429 [2024-05-13 18:37:31.173219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.429 [2024-05-13 18:37:31.173256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.429 [2024-05-13 18:37:31.187452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.429 [2024-05-13 18:37:31.187512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.430 [2024-05-13 18:37:31.187534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.430 [2024-05-13 18:37:31.201136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.430 [2024-05-13 18:37:31.201200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.430 [2024-05-13 18:37:31.201224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.430 [2024-05-13 18:37:31.214837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.430 [2024-05-13 18:37:31.214887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.430 [2024-05-13 18:37:31.214910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.430 [2024-05-13 18:37:31.229319] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.430 [2024-05-13 18:37:31.229371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.430 [2024-05-13 18:37:31.229394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.430 [2024-05-13 18:37:31.244342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.430 [2024-05-13 18:37:31.244404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.430 [2024-05-13 18:37:31.244427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.430 [2024-05-13 18:37:31.256513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.430 [2024-05-13 18:37:31.256566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.430 [2024-05-13 18:37:31.256600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.430 [2024-05-13 18:37:31.269633] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.430 [2024-05-13 18:37:31.269744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.430 [2024-05-13 18:37:31.269770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.430 [2024-05-13 18:37:31.283538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.430 [2024-05-13 18:37:31.283694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.430 [2024-05-13 18:37:31.283718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.430 [2024-05-13 18:37:31.299165] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.430 [2024-05-13 18:37:31.299267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.430 [2024-05-13 18:37:31.299291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.430 [2024-05-13 18:37:31.311921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.430 [2024-05-13 18:37:31.311988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.430 [2024-05-13 18:37:31.312026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.430 [2024-05-13 18:37:31.323400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.430 [2024-05-13 18:37:31.323453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:5924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.430 [2024-05-13 18:37:31.323476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.430 [2024-05-13 18:37:31.336207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.430 [2024-05-13 18:37:31.336269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:7812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.430 [2024-05-13 18:37:31.336309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.430 [2024-05-13 18:37:31.350031] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.430 [2024-05-13 18:37:31.350133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.430 [2024-05-13 18:37:31.350157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:15.430 [2024-05-13 18:37:31.363465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.430 [2024-05-13 18:37:31.363566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.430 [2024-05-13 18:37:31.363605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.376272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.376359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.376385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.389326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.389395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:3126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.389434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.403392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.403450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.403474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.418579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.418708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.418732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.432499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.432613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.432640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.446642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.446742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.446767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.460760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.460821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.460844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.475003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.475051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.475089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.487664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.487761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.487784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.503076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.503162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.503201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.515563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.515725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.515749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.530386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.530484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.530523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.542872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.542920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.542959] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.556093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.556176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.556216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.569960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.570046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.570071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.585485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.585586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.585612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.597812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.597870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.597909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.612606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.612666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.612716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.690 [2024-05-13 18:37:31.626818] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.690 [2024-05-13 18:37:31.626922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.690 [2024-05-13 18:37:31.626963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.949 [2024-05-13 18:37:31.641552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.949 [2024-05-13 18:37:31.641645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.949 [2024-05-13 18:37:31.641669] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.949 [2024-05-13 18:37:31.654378] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.949 [2024-05-13 18:37:31.654458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.949 [2024-05-13 18:37:31.654483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.949 [2024-05-13 18:37:31.668200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x9a4560) 00:24:15.949 [2024-05-13 18:37:31.668253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.949 [2024-05-13 18:37:31.668294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.949 00:24:15.949 Latency(us) 00:24:15.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.949 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:15.949 nvme0n1 : 2.00 17936.95 70.07 0.00 0.00 7127.85 3798.11 34317.03 00:24:15.949 =================================================================================================================== 00:24:15.949 Total : 17936.95 70.07 0.00 0.00 7127.85 3798.11 34317.03 00:24:15.949 0 00:24:15.949 18:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:15.949 18:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:15.949 18:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:15.949 18:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:15.949 | .driver_specific 00:24:15.949 | .nvme_error 00:24:15.949 | .status_code 00:24:15.949 | .command_transient_transport_error' 00:24:16.232 18:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 140 > 0 )) 00:24:16.232 18:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95679 00:24:16.232 18:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95679 ']' 00:24:16.232 18:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95679 00:24:16.232 18:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:24:16.232 18:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:16.232 18:37:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95679 00:24:16.232 killing process with pid 95679 00:24:16.232 Received shutdown signal, test time was about 2.000000 seconds 00:24:16.232 00:24:16.232 Latency(us) 00:24:16.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.232 =================================================================================================================== 00:24:16.232 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:24:16.232 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:16.232 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:16.232 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95679' 00:24:16.232 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95679 00:24:16.232 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95679 00:24:16.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:16.504 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:16.504 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:16.504 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:16.504 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:16.504 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:16.504 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95775 00:24:16.504 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95775 /var/tmp/bperf.sock 00:24:16.504 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 95775 ']' 00:24:16.504 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:16.504 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:16.504 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:16.504 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:16.504 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:16.504 18:37:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:16.504 [2024-05-13 18:37:32.334650] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:24:16.504 [2024-05-13 18:37:32.334761] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95775 ] 00:24:16.504 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:16.504 Zero copy mechanism will not be used. 
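The get_transient_errcount trace above (host/digest.sh@27-@28) shows how the previous 4 KiB run is judged: bdevperf's per-bdev NVMe error counters are read back over the bperf RPC socket and the command_transient_transport_error field must be non-zero (140 in this run). A minimal stand-alone sketch of that check, assuming the same rpc.py path and /var/tmp/bperf.sock socket this run uses:

# Sketch of the check traced in host/digest.sh get_transient_errcount above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bperf.sock

# Fetch I/O statistics for the error-injected bdev and pull out the count of
# completions that ended in COMMAND TRANSIENT TRANSPORT ERROR.
errcount=$("$rpc" -s "$sock" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# The digest test only passes if the injected CRC32C corruption produced at
# least one such completion; the run above reports 140.
(( errcount > 0 )) && echo "saw $errcount transient transport errors"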
00:24:16.763 [2024-05-13 18:37:32.474279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.763 [2024-05-13 18:37:32.592757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.697 18:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:17.697 18:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:24:17.697 18:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:17.697 18:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:17.955 18:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:17.955 18:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.955 18:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.955 18:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.955 18:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:17.955 18:37:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:18.212 nvme0n1 00:24:18.213 18:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:18.213 18:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.213 18:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.213 18:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.213 18:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:18.213 18:37:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:18.472 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:18.472 Zero copy mechanism will not be used. 00:24:18.472 Running I/O for 2 seconds... 
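The trace above arms the second error case (randread, 128 KiB I/O, queue depth 16): error counters are enabled in the bperf bdevperf instance, any previous CRC32C injection on the target is cleared, the controller is attached over TCP with data digest enabled, 32 CRC32C operations are set to be corrupted, and perform_tests drives I/O for two seconds. A compressed sketch of that RPC sequence, using only commands that appear in the trace (paths, socket, and target address are the ones from this run; the target-side RPC socket is assumed to be rpc.py's default):

# Sketch of the setup traced above (host/digest.sh run_bperf_err randread 131072 16).
# bdevperf itself was started as:
#   bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
# (-z makes it wait for perform_tests instead of starting I/O on its own).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
bperf_py=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
sock=/var/tmp/bperf.sock

# Keep per-bdev NVMe error counters and retry failed commands indefinitely.
"$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any CRC32C error injection left on the target (rpc_cmd in the trace talks
# to the nvmf target app; its default RPC socket is assumed here).
"$rpc" accel_error_inject_error -o crc32c -t disable

# Attach the target with data digest enabled so every TCP data PDU is CRC32C-checked.
"$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt 32 target-side CRC32C operations, then run the 2-second workload; the
# corrupted digests surface below as "data digest error" messages followed by
# COMMAND TRANSIENT TRANSPORT ERROR completions on qid:1.
"$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
"$bperf_py" -s "$sock" perform_tests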
00:24:18.472 [2024-05-13 18:37:34.178791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.472 [2024-05-13 18:37:34.178876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.472 [2024-05-13 18:37:34.178893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.472 [2024-05-13 18:37:34.183660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.472 [2024-05-13 18:37:34.183704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.472 [2024-05-13 18:37:34.183718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.472 [2024-05-13 18:37:34.188760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.472 [2024-05-13 18:37:34.188802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.472 [2024-05-13 18:37:34.188816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.472 [2024-05-13 18:37:34.193133] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.472 [2024-05-13 18:37:34.193175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.472 [2024-05-13 18:37:34.193200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.197182] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.197225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.197239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.201380] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.201422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.201436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.205244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.205288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.205302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.209680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.209722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.209736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.214011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.214053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.214068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.218448] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.218507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.218521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.221455] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.221496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.221510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.225271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.225313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.225326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.229693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.229734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.229748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.233384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.233425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.233438] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.237705] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.237745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.237759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.243155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.243198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.243211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.248346] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.248390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.248403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.252493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.252551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.252565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.255389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.255445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.255474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.260445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.260491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.260504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.263975] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.264016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.264029] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.268433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.268476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.268489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.272248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.272290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.272303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.276762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.276804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.276818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.281317] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.281390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.281403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.285659] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.285701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.285715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.288771] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.288811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.288824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.293847] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.293892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:18.473 [2024-05-13 18:37:34.293909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.297530] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.297584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.297600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.301694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.301735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.301748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.306170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.306225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.306239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.473 [2024-05-13 18:37:34.309623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.473 [2024-05-13 18:37:34.309664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.473 [2024-05-13 18:37:34.309677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.314053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.314097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.314110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.318683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.318724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.318738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.321756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.321797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19424 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.321810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.326176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.326221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.326235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.331077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.331120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.331134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.335751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.335793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.335807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.339271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.339314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.339328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.343709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.343750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.343764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.348251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.348294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.348308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.351871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.351911] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.351925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.357214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.357257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.357271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.361892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.361933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.361946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.366127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.366168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.366182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.369864] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.369906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.369920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.374155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.374197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.374211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.378638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.378678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.378691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.382246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.382289] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.382302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.386040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.386082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.386096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.389520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.389561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.389587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.393967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.394025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.394039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.399282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.399324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.399339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.402926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.402966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.402980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.407514] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.474 [2024-05-13 18:37:34.407558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.407583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.474 [2024-05-13 18:37:34.412125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 
00:24:18.474 [2024-05-13 18:37:34.412171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.474 [2024-05-13 18:37:34.412186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.416963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.417009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.417023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.420009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.420049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.420063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.424420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.424462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.424475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.428784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.428832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.428846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.432853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.432895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.432907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.436867] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.436909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.436923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.441210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.441253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.441266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.445536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.445590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.445605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.449061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.449103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.449116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.453286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.453327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.453341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.457463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.457505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.457518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.461730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.461773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.461786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.465660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.465701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.465715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.469676] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.469716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.469730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.473587] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.473626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.473639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.477590] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.477629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.477642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.481833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.481890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.481904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.486026] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.486066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.486080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.489775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.489815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.735 [2024-05-13 18:37:34.489829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.735 [2024-05-13 18:37:34.493536] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.735 [2024-05-13 18:37:34.493585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.493601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:18.736 [2024-05-13 18:37:34.497698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.497756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.497770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.501913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.501956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.501970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.505880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.505920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.505933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.509822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.509864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.509878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.513678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.513718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.513731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.517844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.517885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.517898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.521651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.521704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.521718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.526027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.526068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.526081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.530150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.530191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.530204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.534299] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.534340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.534354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.538460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.538519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.538532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.542258] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.542300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.542314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.547172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.547214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.547227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.550705] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.550745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.550759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.555326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.555382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.555413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.560503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.560545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.560559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.565758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.565800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.565815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.569918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.569959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.569972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.572739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.572777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.572791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.578019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.578061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.578075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.583128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.583168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.583183] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.586700] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.586739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.586753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.591076] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.591120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.591134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.595222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.595264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.736 [2024-05-13 18:37:34.595277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.736 [2024-05-13 18:37:34.599044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.736 [2024-05-13 18:37:34.599086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.599099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.603534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.603586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.603601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.606900] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.606941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.606955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.611348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.611389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 
[2024-05-13 18:37:34.611404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.616227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.616268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.616282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.620189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.620231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.620245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.624124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.624166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.624180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.628086] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.628128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.628141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.632504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.632545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.632558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.636028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.636069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.636084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.640392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.640437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.640452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.645032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.645075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.645089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.648838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.648881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.648895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.653304] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.653346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.653360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.657641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.657682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.657695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.662108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.662154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.662168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.665758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.665799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.665813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.669865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.669909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.669923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.737 [2024-05-13 18:37:34.674499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.737 [2024-05-13 18:37:34.674540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.737 [2024-05-13 18:37:34.674554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.997 [2024-05-13 18:37:34.678555] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.997 [2024-05-13 18:37:34.678613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.997 [2024-05-13 18:37:34.678627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.997 [2024-05-13 18:37:34.682643] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.997 [2024-05-13 18:37:34.682685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.997 [2024-05-13 18:37:34.682698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.997 [2024-05-13 18:37:34.687335] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.997 [2024-05-13 18:37:34.687376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.997 [2024-05-13 18:37:34.687390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.997 [2024-05-13 18:37:34.692736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.997 [2024-05-13 18:37:34.692778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.997 [2024-05-13 18:37:34.692792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.997 [2024-05-13 18:37:34.696142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.997 [2024-05-13 18:37:34.696183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.997 [2024-05-13 18:37:34.696196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.997 [2024-05-13 18:37:34.700765] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.700807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.700820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.705610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.705652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.705666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.709271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.709312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.709325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.713923] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.713965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.713979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.718878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.718920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.718933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.722825] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.722866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.722881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.727027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.727070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.727083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.730713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 
[2024-05-13 18:37:34.730756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.730770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.734837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.734881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.734895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.738564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.738617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.738631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.742904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.742945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.742959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.747633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.747674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.747687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.751971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.752018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.752032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.755809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.755853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.755866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.759982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.760025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.760039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.764338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.764380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.764394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.768284] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.768326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.768339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.773588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.773632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.773646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.778381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.778423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.778437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.782833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.782876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.782890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.785761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.785800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.785814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.790794] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.790836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.790849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.794116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.794157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.794170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.799149] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.799190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.799203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.802796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.802837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.802850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.806768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.806809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.806822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.810965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.998 [2024-05-13 18:37:34.811006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.998 [2024-05-13 18:37:34.811020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.998 [2024-05-13 18:37:34.814651] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.814691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.814705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:18.999 [2024-05-13 18:37:34.818212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.818253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.818267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.822541] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.822613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.822637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.826156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.826208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.826221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.830157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.830199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.830212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.834742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.834782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.834796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.838717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.838775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.838788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.842841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.842885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.842898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.846762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.846812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.846827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.851085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.851134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.851148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.856248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.856304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.856319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.860912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.860963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.860978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.864290] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.864338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.864366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.868762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.868814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.868840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.872728] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.872781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.872795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.876958] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.877008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.877022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.881071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.881117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.881132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.885215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.885258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.885272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.889260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.889303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.889317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.893884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.893926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.893939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.898297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.898339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.898353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.902531] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.902586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:18.999 [2024-05-13 18:37:34.902602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.906997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.907049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.907063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.911561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.911629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.911644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.916233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.916280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.916294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.919339] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.919383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.919397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.924445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.924523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.924537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.929567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:18.999 [2024-05-13 18:37:34.929620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.999 [2024-05-13 18:37:34.929635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.999 [2024-05-13 18:37:34.933308] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.000 [2024-05-13 18:37:34.933379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17152 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.000 [2024-05-13 18:37:34.933409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.000 [2024-05-13 18:37:34.937624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.000 [2024-05-13 18:37:34.937666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.000 [2024-05-13 18:37:34.937679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.259 [2024-05-13 18:37:34.942550] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.259 [2024-05-13 18:37:34.942601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.259 [2024-05-13 18:37:34.942616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.259 [2024-05-13 18:37:34.945878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.259 [2024-05-13 18:37:34.945919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.259 [2024-05-13 18:37:34.945933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.259 [2024-05-13 18:37:34.950227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.259 [2024-05-13 18:37:34.950268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.259 [2024-05-13 18:37:34.950283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.259 [2024-05-13 18:37:34.954838] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.259 [2024-05-13 18:37:34.954880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.259 [2024-05-13 18:37:34.954894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.259 [2024-05-13 18:37:34.959399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.259 [2024-05-13 18:37:34.959444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.259 [2024-05-13 18:37:34.959458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.259 [2024-05-13 18:37:34.964492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.259 [2024-05-13 18:37:34.964560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.259 [2024-05-13 18:37:34.964605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.259 [2024-05-13 18:37:34.967379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.259 [2024-05-13 18:37:34.967419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.259 [2024-05-13 18:37:34.967432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.259 [2024-05-13 18:37:34.972373] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.259 [2024-05-13 18:37:34.972421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:34.972435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:34.977609] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:34.977657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:34.977671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:34.981194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:34.981239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:34.981253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:34.985456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:34.985499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:34.985513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:34.990431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:34.990473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:34.990486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:34.994178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:34.994219] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:34.994234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:34.999142] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:34.999190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:34.999203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.004326] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.004381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.004396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.009803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.009848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.009862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.013522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.013565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.013593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.017691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.017730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.017744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.021861] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.021904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.021918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.026264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 
00:24:19.260 [2024-05-13 18:37:35.026307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.026320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.030473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.030515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.030528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.034007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.034048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.034061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.038732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.038774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.038787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.042678] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.042719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.042733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.046796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.046836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.046865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.051477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.051518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.051532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.055257] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.055299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.055312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.059055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.059097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.059120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.063132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.063190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.063204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.068175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.068236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.068250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.071552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.071626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.071641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.075976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.076035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.076050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.080498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.080555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.080585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:24:19.260 [2024-05-13 18:37:35.083993] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.084042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.084057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.088547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.260 [2024-05-13 18:37:35.088625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.260 [2024-05-13 18:37:35.088641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.260 [2024-05-13 18:37:35.093125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.093187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.093203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.097101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.097156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.097172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.101497] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.101552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.101567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.105619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.105678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.105703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.109739] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.109790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.109804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.113137] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.113190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.113204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.118058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.118101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.118115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.123072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.123118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.123132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.126003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.126052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.126065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.131382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.131427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.131440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.136505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.136564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.136597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.139486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.139528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.139541] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.144323] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.144378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.144393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.148109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.148162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.148176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.151792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.151855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.151869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.156233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.156291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.156306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.159748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.159802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.159816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.164025] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.164080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.164095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.168093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.168151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.168165] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.171953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.172014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.172029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.176634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.176709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.176725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.181022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.181074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.181088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.184683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.184737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.184751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.189411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.189462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.189477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.194079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.194135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.261 [2024-05-13 18:37:35.194149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.261 [2024-05-13 18:37:35.197797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.261 [2024-05-13 18:37:35.197839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:19.261 [2024-05-13 18:37:35.197853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.201894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.201939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.201952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.205856] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.205917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.205931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.210413] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.210468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.210483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.214279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.214328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.214343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.218344] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.218392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.218407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.222677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.222728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.222743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.227328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.227378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.227392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.231830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.231884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.231898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.235998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.236052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.236067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.240816] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.240874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.240889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.244500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.244558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.244587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.249019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.249086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.249100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.253324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.253370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.253384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.256844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.256887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.256900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.261187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.261232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.261246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.265787] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.265834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.265850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.270250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.270293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.270307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.273357] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.273398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.273412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.277124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.522 [2024-05-13 18:37:35.277175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.522 [2024-05-13 18:37:35.277189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.522 [2024-05-13 18:37:35.281676] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.281738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.281753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.285800] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 
00:24:19.523 [2024-05-13 18:37:35.285852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.285867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.290109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.290162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.290177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.294783] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.294838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.294852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.298472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.298521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.298536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.303108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.303162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.303176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.308104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.308157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.308171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.311827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.311876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.311890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.316340] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.316390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.316405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.319835] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.319882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.319897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.324230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.324277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.324291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.327808] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.327850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.327864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.332613] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.332663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.332677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.335689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.335732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.335746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.340451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.340496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.340510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.344356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.344400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.344413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.349103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.349162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.349176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.353090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.353147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.353162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.357737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.357797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.357812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.361605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.361655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.361669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.365671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.365725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.365739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.370293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.370358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.370372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.374849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.374912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.374927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.377710] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.377755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.377769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.382667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.382740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.382755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.387487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.387545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.523 [2024-05-13 18:37:35.387559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.523 [2024-05-13 18:37:35.392623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.523 [2024-05-13 18:37:35.392680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.392706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.395459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.395499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.395513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.400628] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.400669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.400683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.403828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.403867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.403880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.408316] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.408357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.408370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.413646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.413686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.413700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.418002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.418053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.418068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.421111] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.421156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.421171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.425975] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.426031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.426046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.430557] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.430629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:19.524 [2024-05-13 18:37:35.430644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.435742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.435806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.435820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.440331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.440388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.440402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.443197] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.443241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.443255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.448318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.448379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.448393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.453427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.453494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.453509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.457268] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.457321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.457337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.524 [2024-05-13 18:37:35.461546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.524 [2024-05-13 18:37:35.461622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.524 [2024-05-13 18:37:35.461638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.814 [2024-05-13 18:37:35.465943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.814 [2024-05-13 18:37:35.465993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.814 [2024-05-13 18:37:35.466007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.814 [2024-05-13 18:37:35.470136] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.814 [2024-05-13 18:37:35.470185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.814 [2024-05-13 18:37:35.470199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.814 [2024-05-13 18:37:35.474423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.814 [2024-05-13 18:37:35.474473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.814 [2024-05-13 18:37:35.474486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.814 [2024-05-13 18:37:35.478799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.814 [2024-05-13 18:37:35.478841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.814 [2024-05-13 18:37:35.478855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.814 [2024-05-13 18:37:35.483124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.814 [2024-05-13 18:37:35.483168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.814 [2024-05-13 18:37:35.483182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.814 [2024-05-13 18:37:35.486799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.814 [2024-05-13 18:37:35.486843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.814 [2024-05-13 18:37:35.486857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.814 [2024-05-13 18:37:35.490684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.814 [2024-05-13 18:37:35.490737] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.814 [2024-05-13 18:37:35.490751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.814 [2024-05-13 18:37:35.495374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.814 [2024-05-13 18:37:35.495435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.814 [2024-05-13 18:37:35.495449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.814 [2024-05-13 18:37:35.500503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.814 [2024-05-13 18:37:35.500565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.814 [2024-05-13 18:37:35.500596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.814 [2024-05-13 18:37:35.505763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.814 [2024-05-13 18:37:35.505818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.814 [2024-05-13 18:37:35.505833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.814 [2024-05-13 18:37:35.508908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.814 [2024-05-13 18:37:35.508954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.814 [2024-05-13 18:37:35.508969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.814 [2024-05-13 18:37:35.513404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.814 [2024-05-13 18:37:35.513460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.814 [2024-05-13 18:37:35.513475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.814 [2024-05-13 18:37:35.518796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.814 [2024-05-13 18:37:35.518865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.518881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.522089] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 
[2024-05-13 18:37:35.522147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.522161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.526468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.526527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.526541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.531184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.531241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.531255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.534767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.534819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.534834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.539689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.539736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.539750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.544101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.544152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.544166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.548021] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.548077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.548091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.552153] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.552201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.552216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.556836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.556899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.556914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.560921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.560980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.560994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.565813] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.565872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.565888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.569756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.569807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.569822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.573899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.573956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.573971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.577215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.577264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.577278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.581478] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.581532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.581546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.585748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.585800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.585815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.590489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.590545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.590560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.594047] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.594099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.594113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.598731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.598783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.598797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.602980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.603025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.603040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.606374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.606416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.606430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:24:19.815 [2024-05-13 18:37:35.611002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.611045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.611060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.614588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.614629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.614642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.618747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.618790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.815 [2024-05-13 18:37:35.618803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.815 [2024-05-13 18:37:35.623482] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.815 [2024-05-13 18:37:35.623527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.623540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.627500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.627543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.627557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.631630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.631672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.631686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.636143] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.636186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.636200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.640310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.640351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.640365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.644016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.644063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.644076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.649158] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.649202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.649216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.653600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.653640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.653654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.657286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.657330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.657344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.661963] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.662006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.662020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.666749] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.666789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.666803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.671535] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.671592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.671608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.675205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.675248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.675261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.679233] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.679275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.679288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.684051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.684093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.684106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.687880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.687921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.687950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.692615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.692651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.692665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.697408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.697451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:19.816 [2024-05-13 18:37:35.697465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.701751] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.701793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.701807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.705281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.705322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.705336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.709897] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.709939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.709953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.713785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.713826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.713839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.717901] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.717944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.717958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.722434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.722478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.722491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.727851] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.727894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.727908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.733007] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.733049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.733063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.735815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.816 [2024-05-13 18:37:35.735854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.816 [2024-05-13 18:37:35.735867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.816 [2024-05-13 18:37:35.740789] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.817 [2024-05-13 18:37:35.740831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.817 [2024-05-13 18:37:35.740844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.817 [2024-05-13 18:37:35.744234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.817 [2024-05-13 18:37:35.744277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.817 [2024-05-13 18:37:35.744307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.817 [2024-05-13 18:37:35.748524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:19.817 [2024-05-13 18:37:35.748566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.817 [2024-05-13 18:37:35.748593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.753178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.753221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.753235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.758348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.758391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.758404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.762099] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.762147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.762161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.766858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.766908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.766923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.771488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.771544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.771558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.776135] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.776185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.776199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.780636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.780683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.780711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.783661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.783704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.783718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.788199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 
00:24:20.077 [2024-05-13 18:37:35.788254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.788268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.792734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.792785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.792800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.797379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.797431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.797446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.801301] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.801351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.801366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.805891] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.805943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.805957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.810761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.810812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.810827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.814179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.814222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.814235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.818343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.818416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.818429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.822563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.822615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.822629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.826769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.077 [2024-05-13 18:37:35.826811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.077 [2024-05-13 18:37:35.826825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.077 [2024-05-13 18:37:35.831060] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.831100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.831114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.835627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.835683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.835696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.839604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.839660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.839674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.844015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.844058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.844071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.848314] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.848359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.848373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.852126] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.852169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.852184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.856191] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.856236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.856250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.860412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.860472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.860486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.864654] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.864704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.864719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.868977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.869017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.869031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.872416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.872455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.872485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:20.078 [2024-05-13 18:37:35.877249] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.877290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.877303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.881245] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.881286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.881299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.885198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.885240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.885254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.889463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.889505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.889518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.894091] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.894143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.894158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.899214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.899266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.899280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.902075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.902118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.902131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.907118] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.907163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.907177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.911318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.911361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.911374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.915424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.915466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.915496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.919592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.919646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.919660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.924372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.924413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.924443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.927860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.927902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.927916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.932486] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.932528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.932542] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.937085] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.937126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.937140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.941171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.941215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.941238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.945198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.078 [2024-05-13 18:37:35.945243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.078 [2024-05-13 18:37:35.945257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.078 [2024-05-13 18:37:35.949389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:35.949432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:35.949445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:35.952779] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:35.952819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:35.952832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:35.956884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:35.956932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:35.956946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:35.961941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:35.961993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 
18:37:35.962009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:35.965392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:35.965436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:35.965450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:35.969640] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:35.969681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:35.969695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:35.974368] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:35.974416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:35.974431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:35.977768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:35.977815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:35.977829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:35.983046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:35.983107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:35.983121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:35.988392] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:35.988442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:35.988457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:35.992130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:35.992174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:35.992188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:35.996904] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:35.996953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:35.996967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:36.001857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:36.001900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:36.001915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:36.006089] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:36.006139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:36.006154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:36.010207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:36.010250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:36.010264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:36.014470] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:36.014526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:36.014539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.079 [2024-05-13 18:37:36.019271] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.079 [2024-05-13 18:37:36.019314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.079 [2024-05-13 18:37:36.019328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.338 [2024-05-13 18:37:36.022616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.338 [2024-05-13 18:37:36.022659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.338 [2024-05-13 18:37:36.022672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.338 [2024-05-13 18:37:36.026815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.338 [2024-05-13 18:37:36.026856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.338 [2024-05-13 18:37:36.026877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.338 [2024-05-13 18:37:36.032019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.338 [2024-05-13 18:37:36.032063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.338 [2024-05-13 18:37:36.032077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.338 [2024-05-13 18:37:36.035324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.338 [2024-05-13 18:37:36.035365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.338 [2024-05-13 18:37:36.035378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.338 [2024-05-13 18:37:36.039985] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.338 [2024-05-13 18:37:36.040030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.338 [2024-05-13 18:37:36.040044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.338 [2024-05-13 18:37:36.044947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.338 [2024-05-13 18:37:36.044992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.045007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.049941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.049992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.050007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.053438] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.053484] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.053498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.057836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.057887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.057901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.062636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.062698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.062712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.067429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.067475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.067489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.071175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.071219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.071233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.075154] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.075201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.075216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.079798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.079848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.079862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.084562] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 
[2024-05-13 18:37:36.084619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.084633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.089406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.089455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.089469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.092195] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.092238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.092252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.096979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.097026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.097040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.101437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.101483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.101497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.105048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.105089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.105102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.110403] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.110446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.110461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.115354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.115397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.115411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.118958] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.119000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.119014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.123012] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.123055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.123070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.127020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.127062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.127075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.131447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.131495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.131509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.135730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.135774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.135789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.139146] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.139191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.139205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.143442] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.143489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.143503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.148016] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.148062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.148076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.151595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.151639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.151652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.155767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.155813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.155826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.160640] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.160686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.339 [2024-05-13 18:37:36.160713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.339 [2024-05-13 18:37:36.164768] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x10c1af0) 00:24:20.339 [2024-05-13 18:37:36.164817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.340 [2024-05-13 18:37:36.164834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.340 00:24:20.340 Latency(us) 00:24:20.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.340 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:20.340 nvme0n1 : 2.00 7258.18 907.27 0.00 0.00 2200.27 595.78 11021.96 00:24:20.340 =================================================================================================================== 00:24:20.340 Total : 7258.18 907.27 0.00 0.00 2200.27 595.78 11021.96 00:24:20.340 0 
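The randread totals above are internally consistent: at the 131072-byte I/O size shown in the job line, 7258.18 IOPS corresponds to the reported 907.27 MiB/s, and with a queue depth of 16 the 2200.27 us average latency implies roughly the same rate (16 / 2200.27 us is about 7272 IOPS). A quick way to reproduce the throughput figure from the table:

    awk 'BEGIN { printf "%.2f MiB/s\n", 7258.18 * 131072 / (1024 * 1024) }'   # prints 907.27 MiB/s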
00:24:20.340 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:20.340 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:20.340 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:20.340 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:20.340 | .driver_specific 00:24:20.340 | .nvme_error 00:24:20.340 | .status_code 00:24:20.340 | .command_transient_transport_error' 00:24:20.598 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 468 > 0 )) 00:24:20.598 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95775 00:24:20.598 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95775 ']' 00:24:20.598 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95775 00:24:20.598 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:24:20.598 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:20.598 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95775 00:24:20.598 killing process with pid 95775 00:24:20.598 Received shutdown signal, test time was about 2.000000 seconds 00:24:20.598 00:24:20.598 Latency(us) 00:24:20.598 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.598 =================================================================================================================== 00:24:20.598 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.598 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:20.598 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:20.598 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95775' 00:24:20.598 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95775 00:24:20.598 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95775 00:24:21.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
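The get_transient_errcount step above is a single iostat query against the bperf RPC socket: bdev_get_iostat reports per-bdev NVMe error counters (populated because the script enables --nvme-error-stat when configuring each bdevperf instance, as in the randwrite setup below), and jq pulls out the count of COMMAND TRANSIENT TRANSPORT ERROR completions produced by the injected digest errors. Condensed from the trace, with the jq filter on one line:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
    # digest.sh then requires the count to be non-zero; in this run it was 468: (( 468 > 0 ))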
00:24:21.167 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:21.167 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:21.167 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:21.167 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:21.167 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:21.167 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95860 00:24:21.167 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:21.167 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95860 /var/tmp/bperf.sock 00:24:21.167 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 95860 ']' 00:24:21.167 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:21.167 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:21.167 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:21.167 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:21.167 18:37:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:21.167 [2024-05-13 18:37:36.863197] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:24:21.167 [2024-05-13 18:37:36.863291] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95860 ] 00:24:21.167 [2024-05-13 18:37:37.001642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.425 [2024-05-13 18:37:37.121733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.990 18:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:21.990 18:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:24:21.990 18:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:21.990 18:37:37 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:22.248 18:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:22.248 18:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.248 18:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:22.248 18:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.248 18:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:22.248 18:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:22.815 nvme0n1 00:24:22.815 18:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:22.815 18:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.815 18:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:22.815 18:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.815 18:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:22.815 18:37:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:22.815 Running I/O for 2 seconds... 
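The randwrite setup traced above (host/digest.sh@57 through @69) reduces to a short sequence. The sketch below condenses it, with bperf_rpc written out as the small wrapper the script effectively uses; rpc_cmd goes to the nvmf target application's default RPC socket, while bperf_rpc goes to the bdevperf socket, and every path, address and NQN is taken verbatim from the trace:

    # start bdevperf in wait-for-RPC mode (-z) and wait for its socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock

    bperf_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
    # count NVMe errors per status code and retry indefinitely so injected errors stay transient
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # keep crc32c error injection off while attaching with data digest enabled
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # turn injection on (parameters exactly as in the trace) and run the 2-second workload
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests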
00:24:22.815 [2024-05-13 18:37:38.645282] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ee5c8 00:24:22.815 [2024-05-13 18:37:38.646194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:25279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.815 [2024-05-13 18:37:38.646237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:22.815 [2024-05-13 18:37:38.656234] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e2c28 00:24:22.815 [2024-05-13 18:37:38.656974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.815 [2024-05-13 18:37:38.657019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:22.815 [2024-05-13 18:37:38.668217] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e3d08 00:24:22.815 [2024-05-13 18:37:38.669270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:24076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.815 [2024-05-13 18:37:38.669314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:22.815 [2024-05-13 18:37:38.682136] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e6300 00:24:22.815 [2024-05-13 18:37:38.683792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.815 [2024-05-13 18:37:38.683830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:22.815 [2024-05-13 18:37:38.690364] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ef6a8 00:24:22.815 [2024-05-13 18:37:38.691099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.815 [2024-05-13 18:37:38.691138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:22.815 [2024-05-13 18:37:38.704179] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f7da8 00:24:22.815 [2024-05-13 18:37:38.705588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.815 [2024-05-13 18:37:38.705621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:22.815 [2024-05-13 18:37:38.715063] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fcdd0 00:24:22.815 [2024-05-13 18:37:38.716264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.815 [2024-05-13 18:37:38.716306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:006a p:0 m:0 dnr:0 00:24:22.815 [2024-05-13 18:37:38.726297] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fb048 00:24:22.815 [2024-05-13 18:37:38.727395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.815 [2024-05-13 18:37:38.727433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:22.815 [2024-05-13 18:37:38.740120] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ec408 00:24:22.816 [2024-05-13 18:37:38.741861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.816 [2024-05-13 18:37:38.741898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:22.816 [2024-05-13 18:37:38.748317] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ed4e8 00:24:22.816 [2024-05-13 18:37:38.749151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.816 [2024-05-13 18:37:38.749188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.762242] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190df118 00:24:23.075 [2024-05-13 18:37:38.763544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.763598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.772805] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f8e88 00:24:23.075 [2024-05-13 18:37:38.773924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.773964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.785690] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f5be8 00:24:23.075 [2024-05-13 18:37:38.787280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.787317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.793898] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f1ca0 00:24:23.075 [2024-05-13 18:37:38.794585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.794620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:17 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.807698] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190eaef0 00:24:23.075 [2024-05-13 18:37:38.809023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.809060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.819319] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e6b70 00:24:23.075 [2024-05-13 18:37:38.820187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.820227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.830257] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e95a0 00:24:23.075 [2024-05-13 18:37:38.831016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.831055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.840549] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f2510 00:24:23.075 [2024-05-13 18:37:38.841444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.841480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.854377] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e6fa8 00:24:23.075 [2024-05-13 18:37:38.855740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:15385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.855781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.863452] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f7970 00:24:23.075 [2024-05-13 18:37:38.864198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.864237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.875454] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e6738 00:24:23.075 [2024-05-13 18:37:38.876193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.876232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.888356] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f0350 00:24:23.075 [2024-05-13 18:37:38.889287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.889327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.900011] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f7538 00:24:23.075 [2024-05-13 18:37:38.901378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:16471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.901416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.910683] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f31b8 00:24:23.075 [2024-05-13 18:37:38.911862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:8667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.911902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.921896] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ecc78 00:24:23.075 [2024-05-13 18:37:38.922970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.923008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.935684] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e4140 00:24:23.075 [2024-05-13 18:37:38.937411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.937449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.943873] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f8e88 00:24:23.075 [2024-05-13 18:37:38.944669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.944714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.957643] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ebb98 00:24:23.075 [2024-05-13 18:37:38.959083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.959122] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.969208] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e88f8 00:24:23.075 [2024-05-13 18:37:38.970637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.970673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.980031] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e5220 00:24:23.075 [2024-05-13 18:37:38.981455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:9011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.075 [2024-05-13 18:37:38.981511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.075 [2024-05-13 18:37:38.991284] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e3498 00:24:23.075 [2024-05-13 18:37:38.992429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.076 [2024-05-13 18:37:38.992466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:23.076 [2024-05-13 18:37:39.002376] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f20d8 00:24:23.076 [2024-05-13 18:37:39.003219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.076 [2024-05-13 18:37:39.003258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.076 [2024-05-13 18:37:39.013059] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e1f80 00:24:23.076 [2024-05-13 18:37:39.014895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.076 [2024-05-13 18:37:39.014934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.025763] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f20d8 00:24:23.336 [2024-05-13 18:37:39.027194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.027231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.036409] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f4f40 00:24:23.336 [2024-05-13 18:37:39.037718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.037757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.047610] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e3498 00:24:23.336 [2024-05-13 18:37:39.048784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:7530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.048822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.061463] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e3d08 00:24:23.336 [2024-05-13 18:37:39.063276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.063315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.069650] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f3a28 00:24:23.336 [2024-05-13 18:37:39.070378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:24813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.070416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.084202] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190de038 00:24:23.336 [2024-05-13 18:37:39.085897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.085935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.095405] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f20d8 00:24:23.336 [2024-05-13 18:37:39.097105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.097143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.103671] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f8e88 00:24:23.336 [2024-05-13 18:37:39.104420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.104457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.117594] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e6300 00:24:23.336 [2024-05-13 18:37:39.118961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 
18:37:39.118999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.128278] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e95a0 00:24:23.336 [2024-05-13 18:37:39.129487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.129526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.139484] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e01f8 00:24:23.336 [2024-05-13 18:37:39.140595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.140630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.151006] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e0a68 00:24:23.336 [2024-05-13 18:37:39.151658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.151696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.164132] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ebfd0 00:24:23.336 [2024-05-13 18:37:39.165557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.165604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.175428] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fda78 00:24:23.336 [2024-05-13 18:37:39.177010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:22352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.177048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.186144] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ed4e8 00:24:23.336 [2024-05-13 18:37:39.187501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.187539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.197392] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e7818 00:24:23.336 [2024-05-13 18:37:39.198682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:23.336 [2024-05-13 18:37:39.198718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.208059] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f7100 00:24:23.336 [2024-05-13 18:37:39.209193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.209233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.219282] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f6458 00:24:23.336 [2024-05-13 18:37:39.220276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.220313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.233105] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190efae0 00:24:23.336 [2024-05-13 18:37:39.234742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.234779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.241270] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ecc78 00:24:23.336 [2024-05-13 18:37:39.241994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.242030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.255028] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190eb328 00:24:23.336 [2024-05-13 18:37:39.256389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.256427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.266897] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ebb98 00:24:23.336 [2024-05-13 18:37:39.268400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.268437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:23.336 [2024-05-13 18:37:39.277589] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e5a90 00:24:23.336 [2024-05-13 18:37:39.278926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4714 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:23.336 [2024-05-13 18:37:39.278965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.289002] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ee5c8 00:24:23.596 [2024-05-13 18:37:39.290247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.290284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.302883] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190de8a8 00:24:23.596 [2024-05-13 18:37:39.304771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.304809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.310996] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e4578 00:24:23.596 [2024-05-13 18:37:39.311796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.311835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.325380] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fa7d8 00:24:23.596 [2024-05-13 18:37:39.327121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.327160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.336484] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f96f8 00:24:23.596 [2024-05-13 18:37:39.338242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.338281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.344759] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f2510 00:24:23.596 [2024-05-13 18:37:39.345563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.345610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.358580] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e5ec8 00:24:23.596 [2024-05-13 18:37:39.360019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:4970 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.360057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.368742] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e8088 00:24:23.596 [2024-05-13 18:37:39.369732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.369770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.380316] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f7538 00:24:23.596 [2024-05-13 18:37:39.381466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.381504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.392284] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e3498 00:24:23.596 [2024-05-13 18:37:39.393048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.393087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.404343] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190eee38 00:24:23.596 [2024-05-13 18:37:39.405213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.405252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.415256] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fc128 00:24:23.596 [2024-05-13 18:37:39.415997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:8421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.416036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.429296] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e4de8 00:24:23.596 [2024-05-13 18:37:39.431059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19122 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.431094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.437509] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fbcf0 00:24:23.596 [2024-05-13 18:37:39.438355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 
nsid:1 lba:20947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.438392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.451290] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f4b08 00:24:23.596 [2024-05-13 18:37:39.452614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.452647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.460361] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190feb58 00:24:23.596 [2024-05-13 18:37:39.461080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.461117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.474699] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190de038 00:24:23.596 [2024-05-13 18:37:39.476430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.476468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.485829] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f3a28 00:24:23.596 [2024-05-13 18:37:39.487296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.487335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.496905] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e23b8 00:24:23.596 [2024-05-13 18:37:39.498239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.596 [2024-05-13 18:37:39.498277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:23.596 [2024-05-13 18:37:39.507750] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f2d80 00:24:23.597 [2024-05-13 18:37:39.509058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.597 [2024-05-13 18:37:39.509098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:23.597 [2024-05-13 18:37:39.519269] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e88f8 00:24:23.597 [2024-05-13 18:37:39.520542] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.597 [2024-05-13 18:37:39.520600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:23.597 [2024-05-13 18:37:39.531544] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f0788 00:24:23.597 [2024-05-13 18:37:39.532762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.597 [2024-05-13 18:37:39.532800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:23.855 [2024-05-13 18:37:39.542545] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190df550 00:24:23.855 [2024-05-13 18:37:39.543656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.543696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.553583] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f0788 00:24:23.856 [2024-05-13 18:37:39.554534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.554598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.567233] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e4578 00:24:23.856 [2024-05-13 18:37:39.568879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.568930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.578260] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e5220 00:24:23.856 [2024-05-13 18:37:39.579673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:22720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.579718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.589855] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190feb58 00:24:23.856 [2024-05-13 18:37:39.591179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:4499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.591221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.604206] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ebfd0 00:24:23.856 [2024-05-13 18:37:39.606192] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.606250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.613490] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ef270 00:24:23.856 [2024-05-13 18:37:39.614814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.614872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.625219] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e88f8 00:24:23.856 [2024-05-13 18:37:39.626036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.626079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.636144] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e8d30 00:24:23.856 [2024-05-13 18:37:39.636857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.636897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.649580] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fb8b8 00:24:23.856 [2024-05-13 18:37:39.651097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.651153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.661242] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e9e10 00:24:23.856 [2024-05-13 18:37:39.662697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.662766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.673564] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ebfd0 00:24:23.856 [2024-05-13 18:37:39.674837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.674885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.686966] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ebfd0 00:24:23.856 [2024-05-13 
18:37:39.687949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.688014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.700429] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ed920 00:24:23.856 [2024-05-13 18:37:39.701758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.701802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.712899] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fe720 00:24:23.856 [2024-05-13 18:37:39.713866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.713911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.725722] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f0ff8 00:24:23.856 [2024-05-13 18:37:39.726904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.726954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.738824] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e95a0 00:24:23.856 [2024-05-13 18:37:39.740213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.740265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.750730] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190df988 00:24:23.856 [2024-05-13 18:37:39.752109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.752150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.762047] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f7100 00:24:23.856 [2024-05-13 18:37:39.763216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.763254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.774154] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ef270 
00:24:23.856 [2024-05-13 18:37:39.775270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:12209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.775311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:23.856 [2024-05-13 18:37:39.787213] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ddc00 00:24:23.856 [2024-05-13 18:37:39.788027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.856 [2024-05-13 18:37:39.788076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:24.115 [2024-05-13 18:37:39.799904] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190eb760 00:24:24.115 [2024-05-13 18:37:39.800616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.115 [2024-05-13 18:37:39.800651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:24.115 [2024-05-13 18:37:39.812074] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f57b0 00:24:24.115 [2024-05-13 18:37:39.812800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.115 [2024-05-13 18:37:39.812842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:24.115 [2024-05-13 18:37:39.826079] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fac10 00:24:24.115 [2024-05-13 18:37:39.827280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.115 [2024-05-13 18:37:39.827321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:24.115 [2024-05-13 18:37:39.837054] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f7da8 00:24:24.115 [2024-05-13 18:37:39.838077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:39.838116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:39.851052] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ddc00 00:24:24.116 [2024-05-13 18:37:39.852911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:39.852965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:39.859782] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) 
with pdu=0x2000190eea00 00:24:24.116 [2024-05-13 18:37:39.860683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:39.860739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:39.874233] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f0350 00:24:24.116 [2024-05-13 18:37:39.875799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:39.875844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:39.885336] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ec408 00:24:24.116 [2024-05-13 18:37:39.886840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:19588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:39.886897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:39.897977] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fe720 00:24:24.116 [2024-05-13 18:37:39.899334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:39.899391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:39.912489] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f8e88 00:24:24.116 [2024-05-13 18:37:39.914412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:39.914452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:39.920779] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e01f8 00:24:24.116 [2024-05-13 18:37:39.921748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:39.921783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:39.934647] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fac10 00:24:24.116 [2024-05-13 18:37:39.936250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:39.936287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:39.946304] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cacb50) with pdu=0x2000190feb58 00:24:24.116 [2024-05-13 18:37:39.947902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:39.947935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:39.957223] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f3a28 00:24:24.116 [2024-05-13 18:37:39.958704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:39.958741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:39.967525] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190eea00 00:24:24.116 [2024-05-13 18:37:39.968770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:39.968808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:39.978783] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ee190 00:24:24.116 [2024-05-13 18:37:39.979926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:39.979966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:39.990409] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f57b0 00:24:24.116 [2024-05-13 18:37:39.991554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:39.991623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:40.001376] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f4298 00:24:24.116 [2024-05-13 18:37:40.002396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:40.002434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:40.015513] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e6fa8 00:24:24.116 [2024-05-13 18:37:40.017296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:40.017334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:40.023737] tcp.c:2055:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e3060 00:24:24.116 [2024-05-13 18:37:40.024594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:40.024628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:40.035432] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190de038 00:24:24.116 [2024-05-13 18:37:40.036279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:40.036315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:24.116 [2024-05-13 18:37:40.046398] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f8a50 00:24:24.116 [2024-05-13 18:37:40.047117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.116 [2024-05-13 18:37:40.047163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.060464] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e1f80 00:24:24.376 [2024-05-13 18:37:40.061970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.062005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.072104] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fe720 00:24:24.376 [2024-05-13 18:37:40.073599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.073629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.081542] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fda78 00:24:24.376 [2024-05-13 18:37:40.082144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.082183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.093462] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fb8b8 00:24:24.376 [2024-05-13 18:37:40.094503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.094541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.105164] 
tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f2510 00:24:24.376 [2024-05-13 18:37:40.105757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.105797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.116725] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190df550 00:24:24.376 [2024-05-13 18:37:40.117374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:18922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.117412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.127188] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e1b48 00:24:24.376 [2024-05-13 18:37:40.127956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.128002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.141096] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ee190 00:24:24.376 [2024-05-13 18:37:40.142332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.142370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.151941] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ea680 00:24:24.376 [2024-05-13 18:37:40.153035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.153073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.163838] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e0a68 00:24:24.376 [2024-05-13 18:37:40.165225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.165262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.174532] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fe720 00:24:24.376 [2024-05-13 18:37:40.175701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.175739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:24.376 
[2024-05-13 18:37:40.185745] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f9b30 00:24:24.376 [2024-05-13 18:37:40.186860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:18973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.186897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.197350] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f4f40 00:24:24.376 [2024-05-13 18:37:40.198460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.198498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.210872] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e6fa8 00:24:24.376 [2024-05-13 18:37:40.212606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.212660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.222619] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ddc00 00:24:24.376 [2024-05-13 18:37:40.224332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:18691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.224367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.233501] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190dece0 00:24:24.376 [2024-05-13 18:37:40.235100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.376 [2024-05-13 18:37:40.235138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:24.376 [2024-05-13 18:37:40.241973] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f4298 00:24:24.377 [2024-05-13 18:37:40.242806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.377 [2024-05-13 18:37:40.242843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:24.377 [2024-05-13 18:37:40.253667] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e6300 00:24:24.377 [2024-05-13 18:37:40.254457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.377 [2024-05-13 18:37:40.254495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 
dnr:0 00:24:24.377 [2024-05-13 18:37:40.267186] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e0630 00:24:24.377 [2024-05-13 18:37:40.268161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:4924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.377 [2024-05-13 18:37:40.268206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.377 [2024-05-13 18:37:40.278274] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ee190 00:24:24.377 [2024-05-13 18:37:40.279166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.377 [2024-05-13 18:37:40.279210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:24.377 [2024-05-13 18:37:40.289309] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e6fa8 00:24:24.377 [2024-05-13 18:37:40.290018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.377 [2024-05-13 18:37:40.290056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:24.377 [2024-05-13 18:37:40.299785] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e01f8 00:24:24.377 [2024-05-13 18:37:40.300643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.377 [2024-05-13 18:37:40.300694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:24.377 [2024-05-13 18:37:40.313926] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f1430 00:24:24.377 [2024-05-13 18:37:40.315233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.377 [2024-05-13 18:37:40.315276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:24.636 [2024-05-13 18:37:40.324490] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fe720 00:24:24.636 [2024-05-13 18:37:40.325682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:10806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.636 [2024-05-13 18:37:40.325721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:24.636 [2024-05-13 18:37:40.336635] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ee5c8 00:24:24.636 [2024-05-13 18:37:40.338128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:23366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.636 [2024-05-13 18:37:40.338171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 
cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:24.636 [2024-05-13 18:37:40.348193] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ff3c8 00:24:24.636 [2024-05-13 18:37:40.349568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.636 [2024-05-13 18:37:40.349620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:24.636 [2024-05-13 18:37:40.360122] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190eea00 00:24:24.636 [2024-05-13 18:37:40.361025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.636 [2024-05-13 18:37:40.361072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:24.636 [2024-05-13 18:37:40.371182] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ecc78 00:24:24.636 [2024-05-13 18:37:40.371951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.636 [2024-05-13 18:37:40.371997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:24.636 [2024-05-13 18:37:40.381633] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e6300 00:24:24.636 [2024-05-13 18:37:40.382485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.636 [2024-05-13 18:37:40.382523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:24.636 [2024-05-13 18:37:40.395461] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f9b30 00:24:24.636 [2024-05-13 18:37:40.397024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.636 [2024-05-13 18:37:40.397066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:24.636 [2024-05-13 18:37:40.405947] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190eb760 00:24:24.636 [2024-05-13 18:37:40.407846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.636 [2024-05-13 18:37:40.407891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:24.636 [2024-05-13 18:37:40.418375] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f6890 00:24:24.636 [2024-05-13 18:37:40.419331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.636 [2024-05-13 18:37:40.419375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:24.636 [2024-05-13 18:37:40.429460] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ecc78 00:24:24.636 [2024-05-13 18:37:40.430282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.636 [2024-05-13 18:37:40.430327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:24.636 [2024-05-13 18:37:40.440491] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fe720 00:24:24.636 [2024-05-13 18:37:40.441165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:16004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.636 [2024-05-13 18:37:40.441208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:24.636 [2024-05-13 18:37:40.454671] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fb8b8 00:24:24.636 [2024-05-13 18:37:40.456503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.637 [2024-05-13 18:37:40.456543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:24.637 [2024-05-13 18:37:40.462842] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190eea00 00:24:24.637 [2024-05-13 18:37:40.463607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.637 [2024-05-13 18:37:40.463643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:24.637 [2024-05-13 18:37:40.476800] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190efae0 00:24:24.637 [2024-05-13 18:37:40.478365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.637 [2024-05-13 18:37:40.478410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:24.637 [2024-05-13 18:37:40.487724] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f6cc8 00:24:24.637 [2024-05-13 18:37:40.489149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.637 [2024-05-13 18:37:40.489194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:24.637 [2024-05-13 18:37:40.499211] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fd640 00:24:24.637 [2024-05-13 18:37:40.500333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.637 [2024-05-13 18:37:40.500375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:24.637 [2024-05-13 18:37:40.510717] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190f7da8 00:24:24.637 [2024-05-13 18:37:40.511950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.637 [2024-05-13 18:37:40.511989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:24.637 [2024-05-13 18:37:40.522272] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190de8a8 00:24:24.637 [2024-05-13 18:37:40.523036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.637 [2024-05-13 18:37:40.523081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:24.637 [2024-05-13 18:37:40.533925] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e3060 00:24:24.637 [2024-05-13 18:37:40.535033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.637 [2024-05-13 18:37:40.535077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:24.637 [2024-05-13 18:37:40.544968] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e6fa8 00:24:24.637 [2024-05-13 18:37:40.545921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.637 [2024-05-13 18:37:40.545964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:24.637 [2024-05-13 18:37:40.558900] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190de470 00:24:24.637 [2024-05-13 18:37:40.560697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.637 [2024-05-13 18:37:40.560740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:24.637 [2024-05-13 18:37:40.567297] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fac10 00:24:24.637 [2024-05-13 18:37:40.568121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.637 [2024-05-13 18:37:40.568164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:24.896 [2024-05-13 18:37:40.581263] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ebfd0 00:24:24.896 [2024-05-13 18:37:40.582704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.896 [2024-05-13 18:37:40.582739] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:24.896 [2024-05-13 18:37:40.592846] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190ee5c8 00:24:24.896 [2024-05-13 18:37:40.593823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.896 [2024-05-13 18:37:40.593863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:24.896 [2024-05-13 18:37:40.604587] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e7c50 00:24:24.896 [2024-05-13 18:37:40.605899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.896 [2024-05-13 18:37:40.605941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:24.896 [2024-05-13 18:37:40.615710] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190fe2e8 00:24:24.896 [2024-05-13 18:37:40.617077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.896 [2024-05-13 18:37:40.617120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:24.896 [2024-05-13 18:37:40.628033] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cacb50) with pdu=0x2000190e27f0 00:24:24.896 [2024-05-13 18:37:40.629527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:14497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.896 [2024-05-13 18:37:40.629584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:24.896 00:24:24.896 Latency(us) 00:24:24.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.896 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:24.896 nvme0n1 : 2.01 21779.24 85.08 0.00 0.00 5870.61 2398.02 16443.58 00:24:24.896 =================================================================================================================== 00:24:24.896 Total : 21779.24 85.08 0.00 0.00 5870.61 2398.02 16443.58 00:24:24.896 0 00:24:24.896 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:24.896 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:24.896 | .driver_specific 00:24:24.896 | .nvme_error 00:24:24.896 | .status_code 00:24:24.896 | .command_transient_transport_error' 00:24:24.896 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:24.896 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:25.154 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 171 > 0 )) 00:24:25.154 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95860 00:24:25.154 18:37:40 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95860 ']' 00:24:25.154 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95860 00:24:25.154 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:24:25.154 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:25.154 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95860 00:24:25.154 killing process with pid 95860 00:24:25.154 Received shutdown signal, test time was about 2.000000 seconds 00:24:25.154 00:24:25.154 Latency(us) 00:24:25.154 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.154 =================================================================================================================== 00:24:25.154 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:25.154 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:25.154 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:25.154 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95860' 00:24:25.154 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95860 00:24:25.154 18:37:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95860 00:24:25.412 18:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:25.412 18:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:25.412 18:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:25.412 18:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:25.412 18:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:25.412 18:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=95955 00:24:25.413 18:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:25.413 18:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 95955 /var/tmp/bperf.sock 00:24:25.413 18:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 95955 ']' 00:24:25.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:25.413 18:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:25.413 18:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:25.413 18:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:25.413 18:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:25.413 18:37:41 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.413 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:25.413 Zero copy mechanism will not be used. 
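Editor's note on the trace above: it shows the boundary between two passes of the digest-error test. The transient-error count is read back from bdevperf over its private RPC socket, the previous bdevperf process (pid 95860) is killed, and a fresh instance is launched for a 128 KiB randwrite workload at queue depth 16. Below is a minimal shell sketch of that boundary, re-created only from the commands visible in the trace; the wrapper structure and process handling are assumptions, not the literal digest.sh code.

    # Hedged sketch of the pass boundary seen above; commands, paths and the jq
    # filter are copied from the trace, everything else is an assumption.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    get_transient_errcount() {   # mirrors host/digest.sh@28 in the trace above
        "$rpc" -s "$sock" bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0] | .driver_specific | .nvme_error
                     | .status_code | .command_transient_transport_error'
    }

    # The pass only counts as successful if at least one digest error was recorded;
    # in the trace this expands to "(( 171 > 0 ))".
    (( $(get_transient_errcount nvme0n1) > 0 ))

    # Tear down the previous bdevperf instance ($bperfpid held its PID, 95860 above) ...
    kill "$bperfpid" && wait "$bperfpid"

    # ... and relaunch it for the next pass: 128 KiB random writes, queue depth 16,
    # 2-second runtime, idling (-z) until perform_tests arrives on the RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!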
00:24:25.413 [2024-05-13 18:37:41.301734] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:24:25.413 [2024-05-13 18:37:41.301819] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95955 ] 00:24:25.671 [2024-05-13 18:37:41.437406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.671 [2024-05-13 18:37:41.557001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.606 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:26.606 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:24:26.606 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:26.606 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:26.864 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:26.864 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:26.864 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:26.864 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:26.864 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:26.864 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:27.123 nvme0n1 00:24:27.123 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:27.123 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.123 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:27.123 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.123 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:27.123 18:37:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:27.123 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:27.123 Zero copy mechanism will not be used. 00:24:27.123 Running I/O for 2 seconds... 
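Editor's note on the RPC sequence traced just above: it is the per-pass setup for the run whose output follows. Error statistics and unlimited bdev retries are enabled in the bdevperf instance, CRC32C error injection is switched off before the controller is attached with data digest (--ddgst) enabled, injection is then re-armed in corrupt mode with the "-i 32" argument seen in the trace, and perform_tests starts the 2-second workload. Below is a minimal sketch using only the calls visible in the trace; the two wrapper functions are assumptions, and rpc_cmd is shown with the default target socket because the target's socket path does not appear in this excerpt.

    # Hedged sketch of the per-pass setup traced above; every RPC call, flag and
    # value is copied from the log, the shell wrappers themselves are assumptions.
    bperf_rpc() {   # RPCs to the bdevperf instance over the socket seen in the trace
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
    }
    rpc_cmd() {     # RPCs to the NVMe-oF target app; default socket assumed here
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"
    }

    # Count NVMe errors per status code and retry failed I/O indefinitely, so the
    # injected digest failures show up as counted transient transport errors.
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Injection is disabled before the attach and re-enabled afterwards, matching
    # the order in the trace.
    rpc_cmd accel_error_inject_error -o crc32c -t disable

    # Attach the target with data digest enabled so CRC32C is generated and checked
    # on the data PDUs of this connection.
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Re-arm CRC32C error injection with the same arguments as the trace, then
    # kick off the 2-second randwrite run over the same RPC socket.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests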
00:24:27.123 [2024-05-13 18:37:42.988910] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.123 [2024-05-13 18:37:42.989270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.123 [2024-05-13 18:37:42.989305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.123 [2024-05-13 18:37:42.995641] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.123 [2024-05-13 18:37:42.995967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.123 [2024-05-13 18:37:42.996003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.123 [2024-05-13 18:37:43.002202] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.123 [2024-05-13 18:37:43.002499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.123 [2024-05-13 18:37:43.002533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.123 [2024-05-13 18:37:43.008491] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.123 [2024-05-13 18:37:43.008831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.123 [2024-05-13 18:37:43.008865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.123 [2024-05-13 18:37:43.015134] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.123 [2024-05-13 18:37:43.015433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.123 [2024-05-13 18:37:43.015471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.123 [2024-05-13 18:37:43.021688] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.123 [2024-05-13 18:37:43.022032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.123 [2024-05-13 18:37:43.022077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.123 [2024-05-13 18:37:43.028531] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.123 [2024-05-13 18:37:43.028890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.123 [2024-05-13 18:37:43.028926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.123 [2024-05-13 18:37:43.035305] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.123 [2024-05-13 18:37:43.035626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.123 [2024-05-13 18:37:43.035690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.123 [2024-05-13 18:37:43.041978] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.123 [2024-05-13 18:37:43.042306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.123 [2024-05-13 18:37:43.042347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.123 [2024-05-13 18:37:43.048497] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.123 [2024-05-13 18:37:43.048868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.123 [2024-05-13 18:37:43.048908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.123 [2024-05-13 18:37:43.055191] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.123 [2024-05-13 18:37:43.055548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.123 [2024-05-13 18:37:43.055593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.123 [2024-05-13 18:37:43.062004] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.123 [2024-05-13 18:37:43.062330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.123 [2024-05-13 18:37:43.062388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.382 [2024-05-13 18:37:43.068866] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.382 [2024-05-13 18:37:43.069180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.382 [2024-05-13 18:37:43.069226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.382 [2024-05-13 18:37:43.075403] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.382 [2024-05-13 18:37:43.075774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.382 [2024-05-13 18:37:43.075808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.382 [2024-05-13 18:37:43.081947] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.382 [2024-05-13 18:37:43.082301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.382 [2024-05-13 18:37:43.082336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.382 [2024-05-13 18:37:43.088542] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.382 [2024-05-13 18:37:43.088936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.382 [2024-05-13 18:37:43.088970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.382 [2024-05-13 18:37:43.095242] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.382 [2024-05-13 18:37:43.095584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.382 [2024-05-13 18:37:43.095629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.382 [2024-05-13 18:37:43.101990] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.382 [2024-05-13 18:37:43.102357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.382 [2024-05-13 18:37:43.102392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.382 [2024-05-13 18:37:43.108817] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.382 [2024-05-13 18:37:43.109130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.382 [2024-05-13 18:37:43.109158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.115496] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.115844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.115873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.122202] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.122537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.122594] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.128889] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.129197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.129233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.135416] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.135733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.135771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.141928] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.142244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.142276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.148364] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.148714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.148751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.154809] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.155123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.155160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.161247] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.161591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.161632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.167774] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.168088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 
[2024-05-13 18:37:43.168122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.174162] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.174461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.174495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.180473] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.180807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.180842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.186915] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.187212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.187247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.193688] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.194006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.194039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.200191] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.200512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.200550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.206828] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.207162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.207195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.213316] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.213646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.213684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.219848] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.220186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.220222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.226238] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.226549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.226605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.232893] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.233207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.233241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.239228] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.239545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.239584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.245644] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.245971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.246009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.252120] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.252445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.252480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.258566] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.258914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.258948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.265062] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.265377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.383 [2024-05-13 18:37:43.265414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.383 [2024-05-13 18:37:43.271556] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.383 [2024-05-13 18:37:43.271897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.384 [2024-05-13 18:37:43.271936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.384 [2024-05-13 18:37:43.277891] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.384 [2024-05-13 18:37:43.278237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.384 [2024-05-13 18:37:43.278277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.384 [2024-05-13 18:37:43.284297] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.384 [2024-05-13 18:37:43.284623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.384 [2024-05-13 18:37:43.284658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.384 [2024-05-13 18:37:43.290690] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.384 [2024-05-13 18:37:43.291004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.384 [2024-05-13 18:37:43.291044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.384 [2024-05-13 18:37:43.297379] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.384 [2024-05-13 18:37:43.297715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.384 [2024-05-13 18:37:43.297748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.384 [2024-05-13 18:37:43.303940] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.384 [2024-05-13 18:37:43.304263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.384 [2024-05-13 18:37:43.304297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.384 [2024-05-13 18:37:43.310488] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.384 [2024-05-13 18:37:43.310830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.384 [2024-05-13 18:37:43.310864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.384 [2024-05-13 18:37:43.317008] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.384 [2024-05-13 18:37:43.317331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.384 [2024-05-13 18:37:43.317365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.384 [2024-05-13 18:37:43.323384] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.384 [2024-05-13 18:37:43.323713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.384 [2024-05-13 18:37:43.323747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.329672] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.329994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.330027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.336003] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.336317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.336350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.342382] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.342734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.342777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.348879] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 
[2024-05-13 18:37:43.349196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.349231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.355390] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.355704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.355736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.361787] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.362101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.362150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.368255] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.368580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.368607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.374736] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.375064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.375113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.381180] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.381481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.381509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.387567] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.387902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.387940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.394026] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.394345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.394379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.400398] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.400745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.400772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.406833] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.407133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.407170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.413177] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.413504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.413539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.419642] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.419968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.420004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.426068] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.426396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.426437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.432490] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.432852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.432884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.438981] 
tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.439306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.439343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.445424] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.445754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.445789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.451833] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.452160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.452197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.458323] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.458674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.458710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.464786] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.465100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.465134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.471154] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.471452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.471476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.477495] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.477822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.477857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:27.644 [2024-05-13 18:37:43.483789] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.484103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.484141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.490204] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.490531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.490558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.496623] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.496948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.496988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.503031] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.503357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.503396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.509557] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.509901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.509933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.516048] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.516374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.516415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.522553] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.522902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.522937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.529068] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.644 [2024-05-13 18:37:43.529392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.644 [2024-05-13 18:37:43.529427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.644 [2024-05-13 18:37:43.535476] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.645 [2024-05-13 18:37:43.535810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.645 [2024-05-13 18:37:43.535844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.645 [2024-05-13 18:37:43.541886] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.645 [2024-05-13 18:37:43.542185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.645 [2024-05-13 18:37:43.542219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.645 [2024-05-13 18:37:43.548374] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.645 [2024-05-13 18:37:43.548719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.645 [2024-05-13 18:37:43.548753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.645 [2024-05-13 18:37:43.554830] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.645 [2024-05-13 18:37:43.555151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.645 [2024-05-13 18:37:43.555199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.645 [2024-05-13 18:37:43.561229] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.645 [2024-05-13 18:37:43.561553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.645 [2024-05-13 18:37:43.561605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.645 [2024-05-13 18:37:43.567692] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.645 [2024-05-13 18:37:43.568009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.645 [2024-05-13 18:37:43.568055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.645 [2024-05-13 18:37:43.574247] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.645 [2024-05-13 18:37:43.574555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.645 [2024-05-13 18:37:43.574619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.645 [2024-05-13 18:37:43.580827] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.645 [2024-05-13 18:37:43.581156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.645 [2024-05-13 18:37:43.581191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.921 [2024-05-13 18:37:43.587359] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.921 [2024-05-13 18:37:43.587677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.921 [2024-05-13 18:37:43.587714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.921 [2024-05-13 18:37:43.593858] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.921 [2024-05-13 18:37:43.594182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.921 [2024-05-13 18:37:43.594207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.921 [2024-05-13 18:37:43.600395] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.921 [2024-05-13 18:37:43.600713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.921 [2024-05-13 18:37:43.600738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.606853] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.607167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.607203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.613272] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.613606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.613646] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.619733] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.620055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.620098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.626132] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.626434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.626484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.632585] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.632917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.632964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.638985] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.639304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.639347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.645484] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.645822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.645863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.651880] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.652194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.652238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.658330] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.658654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 
[2024-05-13 18:37:43.658692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.664727] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.665044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.665081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.671144] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.671467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.671501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.677678] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.678018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.678055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.684098] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.684413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.684454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.690547] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.690886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.690929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.696968] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.697292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.697335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.703367] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.922 [2024-05-13 18:37:43.703705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.922 [2024-05-13 18:37:43.703739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.922 [2024-05-13 18:37:43.709723] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.710040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.710082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.716117] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.716435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.716479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.722490] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.722833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.722881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.728981] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.729293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.729331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.735313] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.735641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.735676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.741670] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.741973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.742008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.748017] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.748334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.748377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.754406] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.754747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.754784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.760873] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.761196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.761231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.767294] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.767625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.767668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.773696] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.774014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.774055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.780179] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.780494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.780533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.786554] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.786890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.786927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.792970] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.793282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.793317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.799400] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.799728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.923 [2024-05-13 18:37:43.799763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.923 [2024-05-13 18:37:43.805833] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.923 [2024-05-13 18:37:43.806155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.924 [2024-05-13 18:37:43.806189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.924 [2024-05-13 18:37:43.812376] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.924 [2024-05-13 18:37:43.812716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.924 [2024-05-13 18:37:43.812758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.924 [2024-05-13 18:37:43.818934] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.924 [2024-05-13 18:37:43.819249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.924 [2024-05-13 18:37:43.819281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.924 [2024-05-13 18:37:43.825343] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.924 [2024-05-13 18:37:43.825676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.924 [2024-05-13 18:37:43.825710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.924 [2024-05-13 18:37:43.832000] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.924 [2024-05-13 18:37:43.832301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.924 [2024-05-13 18:37:43.832344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.924 [2024-05-13 18:37:43.838529] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.924 
[2024-05-13 18:37:43.838864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.924 [2024-05-13 18:37:43.838909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.924 [2024-05-13 18:37:43.844896] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.924 [2024-05-13 18:37:43.845215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.924 [2024-05-13 18:37:43.845261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.924 [2024-05-13 18:37:43.851364] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:27.924 [2024-05-13 18:37:43.851703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.924 [2024-05-13 18:37:43.851743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.857884] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.858195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.858234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.864201] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.864522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.864561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.870566] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.870893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.870926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.876881] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.877180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.877218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.883376] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.883701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.883735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.889810] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.890130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.890164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.896138] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.896451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.896487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.902517] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.902849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.902883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.908891] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.909205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.909240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.915274] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.915599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.915634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.921548] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.921865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.921898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.927900] 
tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.928201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.928237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.934195] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.934508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.934544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.940763] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.941089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.941125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.947237] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.947537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.947584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.953694] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.954010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.954046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.960009] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.960323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.960363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.966530] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.966872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.966911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:28.191 [2024-05-13 18:37:43.972854] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.973185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.973224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.979231] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.979547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.979589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.985478] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.985809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.985846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.991893] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.992203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.992230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:43.998271] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:43.998585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:43.998624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.004644] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.004969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.005012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.011091] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.011423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.011460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.017734] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.018061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.018098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.024157] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.024486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.024533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.030508] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.030823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.030869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.036913] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.037270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.037320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.043820] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.044175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.044237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.050631] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.050978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.051022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.056513] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.056957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.057006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.062520] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.062785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.062823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.068445] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.068751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.068779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.074339] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.074634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.074669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.080282] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.080631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.080685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.086095] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.086417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.086473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.091699] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.091930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.091958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.097444] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.097680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.097706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.103033] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.103253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.103279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.108719] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.191 [2024-05-13 18:37:44.108949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.191 [2024-05-13 18:37:44.108993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.191 [2024-05-13 18:37:44.114352] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.192 [2024-05-13 18:37:44.114589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.192 [2024-05-13 18:37:44.114613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.192 [2024-05-13 18:37:44.120093] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.192 [2024-05-13 18:37:44.120329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.192 [2024-05-13 18:37:44.120363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.192 [2024-05-13 18:37:44.125919] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.192 [2024-05-13 18:37:44.126140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.192 [2024-05-13 18:37:44.126163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.192 [2024-05-13 18:37:44.131711] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.192 [2024-05-13 18:37:44.131931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.192 [2024-05-13 18:37:44.131960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.137427] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.137659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 
[2024-05-13 18:37:44.137683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.143106] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.143326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.143350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.148802] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.149036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.149062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.154556] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.154804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.154827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.160358] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.160593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.160617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.166116] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.166337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.166360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.171811] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.172037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.172071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.177481] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.177720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.177744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.183187] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.183415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.183438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.188949] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.189172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.189195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.194590] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.194830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.194864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.200202] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.200422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.200446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.205908] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.206127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.206151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.211786] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.212006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.212029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.217844] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.218068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.218091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.223542] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.223795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.223823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.229178] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.229400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.229422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.234868] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.235088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.235111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.240476] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.240725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.240754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.246114] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.246339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.246362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.251707] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.251945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.251968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.257434] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.257672] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.257695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.263005] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.263242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.263267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.268646] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.268893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.268923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.274335] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.274557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.274594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.279973] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.451 [2024-05-13 18:37:44.280192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.451 [2024-05-13 18:37:44.280215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.451 [2024-05-13 18:37:44.285616] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.285840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.285863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.291233] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.291451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.291475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.296960] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 
[2024-05-13 18:37:44.297180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.297203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.302552] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.302791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.302815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.308207] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.308432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.308464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.313908] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.314131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.314154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.319434] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.319669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.319693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.325024] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.325243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.325266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.330502] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.330738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.330768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.336127] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) 
with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.336349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.336373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.341804] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.342025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.342048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.347466] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.347708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.347734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.353159] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.353378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.353400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.358808] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.359029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.359052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.364430] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.364653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.364678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.370075] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.370294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.370317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.375685] tcp.c:2055:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.375904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.375928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.381451] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.381677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.381700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.387104] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.387313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.387336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.452 [2024-05-13 18:37:44.392761] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.452 [2024-05-13 18:37:44.392975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.452 [2024-05-13 18:37:44.392998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.712 [2024-05-13 18:37:44.398373] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.712 [2024-05-13 18:37:44.398599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.712 [2024-05-13 18:37:44.398623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.712 [2024-05-13 18:37:44.404046] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.712 [2024-05-13 18:37:44.404276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.712 [2024-05-13 18:37:44.404300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.712 [2024-05-13 18:37:44.409760] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.712 [2024-05-13 18:37:44.409974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.712 [2024-05-13 18:37:44.409997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.712 [2024-05-13 
18:37:44.415452] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.712 [2024-05-13 18:37:44.415688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.712 [2024-05-13 18:37:44.415711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.712 [2024-05-13 18:37:44.421178] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.712 [2024-05-13 18:37:44.421401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.712 [2024-05-13 18:37:44.421425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.712 [2024-05-13 18:37:44.426994] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.712 [2024-05-13 18:37:44.427203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.712 [2024-05-13 18:37:44.427227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.712 [2024-05-13 18:37:44.432737] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.712 [2024-05-13 18:37:44.432956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.712 [2024-05-13 18:37:44.432979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.712 [2024-05-13 18:37:44.438402] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.712 [2024-05-13 18:37:44.438626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.712 [2024-05-13 18:37:44.438650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.712 [2024-05-13 18:37:44.444029] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.712 [2024-05-13 18:37:44.444236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.712 [2024-05-13 18:37:44.444259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.712 [2024-05-13 18:37:44.449730] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.712 [2024-05-13 18:37:44.449939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.712 [2024-05-13 18:37:44.449962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:24:28.712 [2024-05-13 18:37:44.455370] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.712 [2024-05-13 18:37:44.455588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.712 [2024-05-13 18:37:44.455613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.712 [2024-05-13 18:37:44.461038] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.712 [2024-05-13 18:37:44.461260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.712 [2024-05-13 18:37:44.461283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.712 [2024-05-13 18:37:44.466770] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.712 [2024-05-13 18:37:44.466983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.712 [2024-05-13 18:37:44.467007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.712 [2024-05-13 18:37:44.472453] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.472699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.472723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.478286] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.478498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.478523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.484017] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.484235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.484261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.489713] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.489924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.489958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.495336] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.495547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.495584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.501057] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.501265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.501289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.506741] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.506950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.506978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.512355] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.512606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.512629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.518133] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.518339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.518369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.523875] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.524094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.524118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.529590] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.529798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.529821] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.535297] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.535511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.535541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.540980] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.541199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.541222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.546777] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.546985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.547014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.552379] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.552617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.552641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.558047] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.558274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.558308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.563805] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.564022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.564045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.569591] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.569798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 
[2024-05-13 18:37:44.569826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.575301] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.575514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.575537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.580975] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.581183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.581207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.586651] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.586882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.586911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.592321] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.592534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.592563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.598042] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.598255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.598279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.603766] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.603975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.604008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.609471] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.609697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.609720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.615154] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.615362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.615404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.620818] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.621029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.621053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.626446] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.713 [2024-05-13 18:37:44.626684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.713 [2024-05-13 18:37:44.626708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.713 [2024-05-13 18:37:44.632178] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.714 [2024-05-13 18:37:44.632391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.714 [2024-05-13 18:37:44.632414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.714 [2024-05-13 18:37:44.637990] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.714 [2024-05-13 18:37:44.638198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.714 [2024-05-13 18:37:44.638222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.714 [2024-05-13 18:37:44.643718] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.714 [2024-05-13 18:37:44.643925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.714 [2024-05-13 18:37:44.643948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.714 [2024-05-13 18:37:44.649544] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.714 [2024-05-13 18:37:44.649771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.714 [2024-05-13 18:37:44.649807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.973 [2024-05-13 18:37:44.655185] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.973 [2024-05-13 18:37:44.655408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.973 [2024-05-13 18:37:44.655443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.973 [2024-05-13 18:37:44.661031] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.973 [2024-05-13 18:37:44.661268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.973 [2024-05-13 18:37:44.661303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.973 [2024-05-13 18:37:44.666805] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.973 [2024-05-13 18:37:44.667014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.973 [2024-05-13 18:37:44.667044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.973 [2024-05-13 18:37:44.672506] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.973 [2024-05-13 18:37:44.672741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.973 [2024-05-13 18:37:44.672782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.973 [2024-05-13 18:37:44.678218] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.973 [2024-05-13 18:37:44.678429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.973 [2024-05-13 18:37:44.678453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.973 [2024-05-13 18:37:44.683876] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.973 [2024-05-13 18:37:44.684081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.973 [2024-05-13 18:37:44.684105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.973 [2024-05-13 18:37:44.689489] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.973 [2024-05-13 18:37:44.689723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.973 [2024-05-13 18:37:44.689747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.973 [2024-05-13 18:37:44.695138] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.973 [2024-05-13 18:37:44.695345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.973 [2024-05-13 18:37:44.695386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.973 [2024-05-13 18:37:44.700763] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.973 [2024-05-13 18:37:44.700978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.973 [2024-05-13 18:37:44.701001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.973 [2024-05-13 18:37:44.706354] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.973 [2024-05-13 18:37:44.706560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.973 [2024-05-13 18:37:44.706598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.973 [2024-05-13 18:37:44.712069] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.973 [2024-05-13 18:37:44.712278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.973 [2024-05-13 18:37:44.712309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.973 [2024-05-13 18:37:44.717817] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.718028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.718052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.723411] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.723646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.723670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.729166] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 
[2024-05-13 18:37:44.729377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.729404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.734914] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.735124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.735150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.740670] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.740896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.740934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.746420] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.746643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.746679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.752129] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.752335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.752362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.757762] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.757975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.758007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.763521] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.763748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.763784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.769249] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.769459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.769498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.774944] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.775156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.775183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.780655] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.780878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.780905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.786379] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.786623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.786656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.792654] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.793110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.793185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.799091] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.799229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.799264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.804744] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.804874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.804914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.810417] 
tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.810555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.810605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.816140] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.816241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.816276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.821767] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.821907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.821939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.827515] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.827663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.827695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.833118] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.833321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.833352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.838766] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.838859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.838891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.844454] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.844598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.844633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:24:28.974 [2024-05-13 18:37:44.850177] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.850265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.850299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.855830] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.855940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.855969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.861514] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.861659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.861688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.867124] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.867222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.867250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.974 [2024-05-13 18:37:44.872817] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.974 [2024-05-13 18:37:44.872931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.974 [2024-05-13 18:37:44.872966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.975 [2024-05-13 18:37:44.878505] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.975 [2024-05-13 18:37:44.878640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.975 [2024-05-13 18:37:44.878669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.975 [2024-05-13 18:37:44.884173] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.975 [2024-05-13 18:37:44.884326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.975 [2024-05-13 18:37:44.884364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.975 [2024-05-13 18:37:44.889901] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.975 [2024-05-13 18:37:44.889990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.975 [2024-05-13 18:37:44.890028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.975 [2024-05-13 18:37:44.895446] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.975 [2024-05-13 18:37:44.895562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.975 [2024-05-13 18:37:44.895609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:28.975 [2024-05-13 18:37:44.901271] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.975 [2024-05-13 18:37:44.901436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.975 [2024-05-13 18:37:44.901473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:28.975 [2024-05-13 18:37:44.906993] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.975 [2024-05-13 18:37:44.907142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.975 [2024-05-13 18:37:44.907170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:28.975 [2024-05-13 18:37:44.912749] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:28.975 [2024-05-13 18:37:44.912906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.975 [2024-05-13 18:37:44.912934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.234 [2024-05-13 18:37:44.918459] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:29.234 [2024-05-13 18:37:44.918629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.234 [2024-05-13 18:37:44.918659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.234 [2024-05-13 18:37:44.924304] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:29.234 [2024-05-13 18:37:44.924461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.234 [2024-05-13 18:37:44.924489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.234 [2024-05-13 18:37:44.930057] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:29.234 [2024-05-13 18:37:44.930180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.234 [2024-05-13 18:37:44.930209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.234 [2024-05-13 18:37:44.935805] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:29.234 [2024-05-13 18:37:44.935937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.234 [2024-05-13 18:37:44.935970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.234 [2024-05-13 18:37:44.941488] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:29.234 [2024-05-13 18:37:44.941637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.234 [2024-05-13 18:37:44.941666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.234 [2024-05-13 18:37:44.947189] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:29.234 [2024-05-13 18:37:44.947290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.235 [2024-05-13 18:37:44.947319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.235 [2024-05-13 18:37:44.952806] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:29.235 [2024-05-13 18:37:44.952939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.235 [2024-05-13 18:37:44.952968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.235 [2024-05-13 18:37:44.958465] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:29.235 [2024-05-13 18:37:44.958606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.235 [2024-05-13 18:37:44.958636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.235 [2024-05-13 18:37:44.964058] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:29.235 [2024-05-13 18:37:44.964196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.235 [2024-05-13 18:37:44.964225] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.235 [2024-05-13 18:37:44.969788] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:29.235 [2024-05-13 18:37:44.969914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.235 [2024-05-13 18:37:44.969943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.235 [2024-05-13 18:37:44.975381] tcp.c:2055:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1caccf0) with pdu=0x2000190fef90 00:24:29.235 [2024-05-13 18:37:44.975487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.235 [2024-05-13 18:37:44.975515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.235 00:24:29.235 Latency(us) 00:24:29.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.235 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:29.235 nvme0n1 : 2.00 5064.24 633.03 0.00 0.00 3152.05 2323.55 12451.84 00:24:29.235 =================================================================================================================== 00:24:29.235 Total : 5064.24 633.03 0.00 0.00 3152.05 2323.55 12451.84 00:24:29.235 0 00:24:29.235 18:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:29.235 18:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:29.235 18:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:29.235 18:37:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:29.235 | .driver_specific 00:24:29.235 | .nvme_error 00:24:29.235 | .status_code 00:24:29.235 | .command_transient_transport_error' 00:24:29.493 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 327 > 0 )) 00:24:29.493 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 95955 00:24:29.493 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95955 ']' 00:24:29.493 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95955 00:24:29.493 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:24:29.493 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:29.493 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95955 00:24:29.493 killing process with pid 95955 00:24:29.493 Received shutdown signal, test time was about 2.000000 seconds 00:24:29.493 00:24:29.493 Latency(us) 00:24:29.493 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:29.493 =================================================================================================================== 00:24:29.493 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:24:29.493 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:29.493 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:29.493 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95955' 00:24:29.493 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95955 00:24:29.493 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95955 00:24:29.751 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 95635 00:24:29.751 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 95635 ']' 00:24:29.751 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 95635 00:24:29.751 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:24:29.751 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:29.751 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95635 00:24:29.751 killing process with pid 95635 00:24:29.751 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:29.751 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:29.751 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95635' 00:24:29.751 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 95635 00:24:29.751 [2024-05-13 18:37:45.557039] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:29.751 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 95635 00:24:30.009 00:24:30.009 real 0m19.420s 00:24:30.009 user 0m36.988s 00:24:30.009 sys 0m5.047s 00:24:30.009 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:30.009 18:37:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:30.009 ************************************ 00:24:30.009 END TEST nvmf_digest_error 00:24:30.009 ************************************ 00:24:30.268 18:37:45 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:30.268 18:37:45 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:30.268 18:37:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:30.268 18:37:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:30.268 rmmod nvme_tcp 00:24:30.268 rmmod nvme_fabrics 00:24:30.268 rmmod nvme_keyring 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- 
nvmf/common.sh@124 -- # set -e 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 95635 ']' 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 95635 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 95635 ']' 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 95635 00:24:30.268 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (95635) - No such process 00:24:30.268 Process with pid 95635 is not found 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 95635 is not found' 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:30.268 00:24:30.268 real 0m39.346s 00:24:30.268 user 1m13.886s 00:24:30.268 sys 0m9.982s 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:30.268 18:37:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:30.268 ************************************ 00:24:30.268 END TEST nvmf_digest 00:24:30.268 ************************************ 00:24:30.268 18:37:46 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ 1 -eq 1 ]] 00:24:30.268 18:37:46 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ tcp == \t\c\p ]] 00:24:30.268 18:37:46 nvmf_tcp -- nvmf/nvmf.sh@111 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:24:30.268 18:37:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:30.268 18:37:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:30.268 18:37:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:30.268 ************************************ 00:24:30.268 START TEST nvmf_mdns_discovery 00:24:30.268 ************************************ 00:24:30.268 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:24:30.527 * Looking for test storage... 
00:24:30.527 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:30.527 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:30.527 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:30.527 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@47 -- # : 0 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:24:30.528 
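The host identity used for every connection in this run comes from nvme-cli: nvme gen-hostnqn prints a fresh UUID-based NQN, and the host ID seen above is just its UUID suffix. A minimal sketch of the same derivation (the parameter expansion is one plausible way to obtain the value the trace shows; the UUID differs on every run):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:3bc393d8-...
NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the UUID after the last ':'
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")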
18:37:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:30.528 Cannot find device "nvmf_tgt_br" 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # true 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:30.528 Cannot find device "nvmf_tgt_br2" 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # true 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # ip link set nvmf_init_br 
down 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:30.528 Cannot find device "nvmf_tgt_br" 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # true 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:30.528 Cannot find device "nvmf_tgt_br2" 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # true 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:30.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:30.528 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:30.528 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:30.786 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:30.786 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:30.786 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:30.786 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:30.786 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:30.786 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:30.786 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:30.786 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:30.786 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:30.786 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:30.786 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:30.787 18:37:46 
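The nvmf_veth_init trace above and just below amounts to three veth pairs hanging off one bridge: nvmf_init_if stays in the root namespace with 10.0.0.1/24, while nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into nvmf_tgt_ns_spdk, and the peer ends are enslaved to nvmf_br in the next few commands. Condensed into a standalone sketch (names and addresses are the ones in the trace; the iptables rules and ping checks that follow are omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk        # target ports live inside the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if               # initiator address in the root namespace
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                # the bridge ties all three pairs together
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br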
nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:30.787 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.787 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:24:30.787 00:24:30.787 --- 10.0.0.2 ping statistics --- 00:24:30.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.787 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:30.787 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:30.787 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:24:30.787 00:24:30.787 --- 10.0.0.3 ping statistics --- 00:24:30.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.787 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:30.787 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:30.787 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:24:30.787 00:24:30.787 --- 10.0.0.1 ping statistics --- 00:24:30.787 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.787 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@433 -- # return 0 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@481 -- # nvmfpid=96245 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # waitforlisten 96245 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 96245 ']' 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:30.787 18:37:46 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.044 [2024-05-13 18:37:46.735942] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:24:31.044 [2024-05-13 18:37:46.736043] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.044 [2024-05-13 18:37:46.876893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.301 [2024-05-13 18:37:47.004352] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.301 [2024-05-13 18:37:47.004414] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.301 [2024-05-13 18:37:47.004430] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.301 [2024-05-13 18:37:47.004441] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.301 [2024-05-13 18:37:47.004450] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
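nvmfappstart launches the target inside the namespace with the exact flags shown above and then blocks in waitforlisten until the RPC socket answers. The trace does not show how waitforlisten is implemented; a simple stand-in with the same effect is to poll a basic RPC, since rpc_get_methods stays available even in the --wait-for-rpc holding state:

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done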
00:24:31.301 [2024-05-13 18:37:47.004488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.866 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:31.866 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:24:31.866 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.866 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:31.866 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.866 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.866 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:24:31.866 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.866 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:31.866 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:31.866 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:24:31.866 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:31.866 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.125 [2024-05-13 18:37:47.906315] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.125 [2024-05-13 18:37:47.918256] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:32.125 [2024-05-13 18:37:47.918494] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.125 null0 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd 
bdev_null_create null1 1000 512 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.125 null1 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.125 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.126 null2 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.126 null3 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # hostpid=96295 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # waitforlisten 96295 /tmp/host.sock 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@827 -- # '[' -z 96295 ']' 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:32.126 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:32.126 18:37:47 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:32.126 [2024-05-13 18:37:48.032514] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
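Every rpc_cmd in the trace is a thin wrapper over SPDK's JSON-RPC interface, so the target-side bring-up above can be replayed directly with scripts/rpc.py against the default /var/tmp/spdk.sock; the host-side instance started with -r /tmp/host.sock is reached the same way by adding -s /tmp/host.sock, as the later rpc_cmd -s calls do. A sketch of the target configuration (commands and arguments mirror the trace; the -o and -u transport options are simply the ones it passes):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc nvmf_set_config --discovery-filter=address       # the DISCOVERY_FILTER=address value set at the top of the script
$rpc framework_start_init                              # leave the --wait-for-rpc holding state
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
for b in null0 null1 null2 null3; do
    $rpc bdev_null_create "$b" 1000 512                # null bdevs that will back the exported namespaces
done
$rpc bdev_wait_for_examine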
00:24:32.126 [2024-05-13 18:37:48.032645] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96295 ] 00:24:32.383 [2024-05-13 18:37:48.177730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.383 [2024-05-13 18:37:48.312260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:33.370 18:37:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:33.370 18:37:49 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@860 -- # return 0 00:24:33.370 18:37:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:24:33.370 18:37:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:24:33.370 18:37:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:24:33.370 18:37:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # avahipid=96324 00:24:33.370 18:37:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # sleep 1 00:24:33.370 18:37:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:24:33.370 18:37:49 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:24:33.370 Process 1009 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:24:33.370 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:24:33.370 Successfully dropped root privileges. 00:24:33.370 avahi-daemon 0.8 starting up. 00:24:33.370 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:24:33.370 Successfully called chroot(). 00:24:33.370 Successfully dropped remaining capabilities. 00:24:34.304 No service file found in /etc/avahi/services. 00:24:34.304 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:24:34.304 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:24:34.304 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:24:34.304 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:24:34.304 Network interface enumeration completed. 00:24:34.304 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:24:34.304 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:24:34.304 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:24:34.304 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.304 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 1498539250. 
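The avahi-daemon that produced the startup messages above runs inside the target namespace and is pinned to the two target-side interfaces by the inline configuration fed through process substitution; written out as an ordinary config file, the same setup looks like this (the file path is illustrative, the content is exactly what the trace echoes):

cat > /tmp/avahi-mdns-test.conf <<'EOF'
[server]
allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
use-ipv4=yes
use-ipv6=no
EOF
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-mdns-test.conf &
avahipid=$!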
00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # notify_id=0 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:24:34.304 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.562 18:37:50 
nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.562 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.563 [2024-05-13 18:37:50.462211] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:24:34.563 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.821 [2024-05-13 18:37:50.535106] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.821 [2024-05-13 18:37:50.575075] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.821 18:37:50 
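Each exported subsystem in the trace follows the same recipe: create it, attach a null bdev as a namespace, allow the test host NQN, and add a TCP listener; the host-side instance on /tmp/host.sock is the one running bdev_nvme_start_mdns_discovery and will attach to whatever the advertised discovery service reports. Condensed for one of the two subsystems (cnode20 on 10.0.0.3 is the same apart from names, bdev, and address):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# target side: export null0 as cnode0 to the test host on 10.0.0.2:4420
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# host side: follow _nvme-disc._tcp advertisements and connect as the test host NQN
$rpc -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test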
nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:34.821 [2024-05-13 18:37:50.583005] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # avahi_clientpid=96375 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # sleep 5 00:24:34.821 18:37:50 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:24:35.753 [2024-05-13 18:37:51.362211] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:24:35.753 Established under name 'CDC' 00:24:36.012 [2024-05-13 18:37:51.762241] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:24:36.012 [2024-05-13 18:37:51.762293] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:24:36.012 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:24:36.012 cookie is 0 00:24:36.012 is_local: 1 00:24:36.012 our_own: 0 00:24:36.012 wide_area: 0 00:24:36.012 multicast: 1 00:24:36.012 cached: 1 00:24:36.012 [2024-05-13 18:37:51.862226] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:24:36.012 [2024-05-13 18:37:51.862287] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:24:36.012 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:24:36.012 cookie is 0 00:24:36.012 is_local: 1 00:24:36.012 our_own: 0 00:24:36.012 wide_area: 0 00:24:36.012 multicast: 1 00:24:36.012 cached: 1 00:24:36.981 [2024-05-13 18:37:52.772081] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:36.981 [2024-05-13 18:37:52.772134] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:36.981 [2024-05-13 18:37:52.772155] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:36.981 [2024-05-13 18:37:52.858227] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:24:36.981 [2024-05-13 18:37:52.871837] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:36.981 [2024-05-13 18:37:52.871874] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:36.981 [2024-05-13 18:37:52.871902] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:36.981 [2024-05-13 18:37:52.922065] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:24:36.981 [2024-05-13 18:37:52.922133] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:24:37.240 [2024-05-13 18:37:52.958521] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:24:37.240 [2024-05-13 18:37:53.014035] 
bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:24:37.240 [2024-05-13 18:37:53.014099] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # sort 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # xargs 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # sort 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # xargs 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:24:39.772 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 
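Once avahi-publish has registered the CDC service (_nvme-disc._tcp on port 8009, with TXT p=tcp and the discovery NQN), the mdns client on the host resolves it on both interfaces and attaches one discovery controller per address, which is what the attach/found-again messages above record. The subsequent checks are read-only RPCs against the host socket, flattened with jq, sort and xargs exactly as in the trace; for example (expected values reflect this point in the run):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }

rpc bdev_nvme_get_mdns_discovery_info | jq -r '.[].name'                 # -> mdns
rpc bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs        # -> mdns0_nvme mdns1_nvme
rpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs           # -> mdns0_nvme0 mdns1_nvme0
rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs                      # -> mdns0_nvme0n1 mdns1_nvme0n1
rpc bdev_nvme_get_controllers -n mdns0_nvme0 \
    | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs                 # -> 4420 at this stage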
00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:24:40.031 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.291 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:24:40.291 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # get_notification_count 00:24:40.291 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:40.291 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.291 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.291 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:24:40.291 18:37:55 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.291 18:37:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=2 00:24:40.291 18:37:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=2 00:24:40.291 18:37:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:24:40.291 18:37:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:40.291 18:37:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.291 18:37:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.291 18:37:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.291 18:37:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:24:40.291 18:37:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:40.291 18:37:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.291 18:37:56 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:40.291 18:37:56 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@139 -- # sleep 1 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@142 -- # get_notification_count 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=2 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=4 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.226 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.485 [2024-05-13 18:37:57.173942] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:41.485 [2024-05-13 18:37:57.174226] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:41.485 [2024-05-13 18:37:57.174295] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:41.485 [2024-05-13 18:37:57.175166] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:41.485 [2024-05-13 18:37:57.175216] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:41.485 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.486 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:24:41.486 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.486 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:41.486 [2024-05-13 18:37:57.181844] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:24:41.486 [2024-05-13 18:37:57.182193] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:41.486 [2024-05-13 18:37:57.182246] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:41.486 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.486 18:37:57 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 1 00:24:41.486 [2024-05-13 18:37:57.312400] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:24:41.486 [2024-05-13 18:37:57.312968] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:24:41.486 [2024-05-13 18:37:57.375359] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:24:41.486 [2024-05-13 18:37:57.375468] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:24:41.486 [2024-05-13 18:37:57.375478] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:41.486 [2024-05-13 18:37:57.375555] 
bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:41.486 [2024-05-13 18:37:57.376037] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:24:41.486 [2024-05-13 18:37:57.376075] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:41.486 [2024-05-13 18:37:57.376083] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:41.486 [2024-05-13 18:37:57.376115] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:41.486 [2024-05-13 18:37:57.421736] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:41.486 [2024-05-13 18:37:57.421827] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:41.486 [2024-05-13 18:37:57.421915] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:24:41.486 [2024-05-13 18:37:57.421925] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 
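After the 4421 listeners are added, the targets raise discovery AERs, the host re-reads both discovery log pages, and each mdns controller gains a second path; the checks that follow assert exactly that by flattening each controller's trsvcid list (pipeline as in the trace), e.g.:

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock "$@"; }

for name in mdns0_nvme0 mdns1_nvme0; do
    paths=$(rpc bdev_nvme_get_controllers -n "$name" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
    [[ $paths == "4420 4421" ]] || { echo "unexpected paths for $name: $paths" >&2; exit 1; }
done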
00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@155 -- # get_notification_count 00:24:42.491 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:24:42.492 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:24:42.492 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.492 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.492 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.762 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=0 00:24:42.762 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=4 00:24:42.762 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:24:42.762 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:42.762 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.762 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.762 [2024-05-13 18:37:58.471189] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:42.762 [2024-05-13 18:37:58.471236] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:42.762 [2024-05-13 18:37:58.471276] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:42.762 [2024-05-13 18:37:58.471291] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:42.762 [2024-05-13 18:37:58.473944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.762 [2024-05-13 18:37:58.473990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.762 [2024-05-13 18:37:58.474013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.762 [2024-05-13 18:37:58.474029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.762 [2024-05-13 18:37:58.474040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.762 [2024-05-13 18:37:58.474050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.762 [2024-05-13 18:37:58.474060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.762 [2024-05-13 18:37:58.474069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.762 [2024-05-13 18:37:58.474078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928820 is same with the state(5) to be set 00:24:42.762 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.762 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:24:42.762 18:37:58 nvmf_tcp.nvmf_mdns_discovery 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:24:42.762 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.762 [2024-05-13 18:37:58.479226] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:42.762 [2024-05-13 18:37:58.479292] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:24:42.762 [2024-05-13 18:37:58.480645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.762 [2024-05-13 18:37:58.480702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.762 [2024-05-13 18:37:58.480717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.762 [2024-05-13 18:37:58.480728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.762 [2024-05-13 18:37:58.480737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.762 [2024-05-13 18:37:58.480746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.762 [2024-05-13 18:37:58.480756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:42.762 [2024-05-13 18:37:58.480764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:42.762 [2024-05-13 18:37:58.480774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eafe0 is same with the state(5) to be set 00:24:42.762 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:42.762 18:37:58 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # sleep 1 00:24:42.762 [2024-05-13 18:37:58.483892] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928820 (9): Bad file descriptor 00:24:42.762 [2024-05-13 18:37:58.490601] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eafe0 (9): Bad file descriptor 00:24:42.762 [2024-05-13 18:37:58.493917] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:42.762 [2024-05-13 18:37:58.494075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-05-13 18:37:58.494133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-05-13 18:37:58.494150] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928820 with addr=10.0.0.2, port=4420 00:24:42.762 [2024-05-13 18:37:58.494163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928820 is same with the state(5) to be set 00:24:42.762 [2024-05-13 18:37:58.494184] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928820 (9): Bad file descriptor 00:24:42.762 [2024-05-13 18:37:58.494220] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:42.762 [2024-05-13 18:37:58.494232] 
nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:42.762 [2024-05-13 18:37:58.494243] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:42.762 [2024-05-13 18:37:58.494260] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.762 [2024-05-13 18:37:58.500614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:42.762 [2024-05-13 18:37:58.500737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-05-13 18:37:58.500788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-05-13 18:37:58.500805] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eafe0 with addr=10.0.0.3, port=4420 00:24:42.762 [2024-05-13 18:37:58.500816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eafe0 is same with the state(5) to be set 00:24:42.762 [2024-05-13 18:37:58.500836] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eafe0 (9): Bad file descriptor 00:24:42.762 [2024-05-13 18:37:58.500851] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:42.762 [2024-05-13 18:37:58.500861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:42.762 [2024-05-13 18:37:58.500871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:42.762 [2024-05-13 18:37:58.500887] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.762 [2024-05-13 18:37:58.503981] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:42.762 [2024-05-13 18:37:58.504086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-05-13 18:37:58.504136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-05-13 18:37:58.504153] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928820 with addr=10.0.0.2, port=4420 00:24:42.762 [2024-05-13 18:37:58.504164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928820 is same with the state(5) to be set 00:24:42.762 [2024-05-13 18:37:58.504182] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928820 (9): Bad file descriptor 00:24:42.762 [2024-05-13 18:37:58.504197] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:42.762 [2024-05-13 18:37:58.504206] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:42.762 [2024-05-13 18:37:58.504216] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:42.762 [2024-05-13 18:37:58.504260] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
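The repeated "connect() failed, errno = 111" entries that start here are expected: 111 is ECONNREFUSED on Linux, so each reconnect attempt is bouncing off the 4420 listeners that were just removed, and the loop only settles once the refreshed discovery log page steers both controllers onto 4421. A quick way to decode the errno outside the harness (python3 is assumed available here, which the rpc_cmd calls above already rely on):

  python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
  # ECONNREFUSED Connection refused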
00:24:42.762 [2024-05-13 18:37:58.510688] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:42.762 [2024-05-13 18:37:58.510812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-05-13 18:37:58.510875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-05-13 18:37:58.510893] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eafe0 with addr=10.0.0.3, port=4420 00:24:42.762 [2024-05-13 18:37:58.510905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eafe0 is same with the state(5) to be set 00:24:42.762 [2024-05-13 18:37:58.510924] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eafe0 (9): Bad file descriptor 00:24:42.762 [2024-05-13 18:37:58.510940] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:42.762 [2024-05-13 18:37:58.510950] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:42.762 [2024-05-13 18:37:58.510960] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:42.762 [2024-05-13 18:37:58.510976] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.762 [2024-05-13 18:37:58.514046] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:42.762 [2024-05-13 18:37:58.514141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-05-13 18:37:58.514189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.762 [2024-05-13 18:37:58.514206] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928820 with addr=10.0.0.2, port=4420 00:24:42.762 [2024-05-13 18:37:58.514217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928820 is same with the state(5) to be set 00:24:42.762 [2024-05-13 18:37:58.514234] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928820 (9): Bad file descriptor 00:24:42.762 [2024-05-13 18:37:58.514275] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:42.762 [2024-05-13 18:37:58.514287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:42.763 [2024-05-13 18:37:58.514296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:42.763 [2024-05-13 18:37:58.514311] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.763 [2024-05-13 18:37:58.520759] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:42.763 [2024-05-13 18:37:58.520853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.520900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.520917] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eafe0 with addr=10.0.0.3, port=4420 00:24:42.763 [2024-05-13 18:37:58.520941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eafe0 is same with the state(5) to be set 00:24:42.763 [2024-05-13 18:37:58.520958] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eafe0 (9): Bad file descriptor 00:24:42.763 [2024-05-13 18:37:58.520973] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:42.763 [2024-05-13 18:37:58.520983] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:42.763 [2024-05-13 18:37:58.520998] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:42.763 [2024-05-13 18:37:58.521021] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.763 [2024-05-13 18:37:58.524104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:42.763 [2024-05-13 18:37:58.524195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.524243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.524259] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928820 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-05-13 18:37:58.524270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928820 is same with the state(5) to be set 00:24:42.763 [2024-05-13 18:37:58.524286] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928820 (9): Bad file descriptor 00:24:42.763 [2024-05-13 18:37:58.524367] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:42.763 [2024-05-13 18:37:58.524382] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:42.763 [2024-05-13 18:37:58.524393] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:42.763 [2024-05-13 18:37:58.524408] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.763 [2024-05-13 18:37:58.530822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:42.763 [2024-05-13 18:37:58.530921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.530970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.530990] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eafe0 with addr=10.0.0.3, port=4420 00:24:42.763 [2024-05-13 18:37:58.531008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eafe0 is same with the state(5) to be set 00:24:42.763 [2024-05-13 18:37:58.531032] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eafe0 (9): Bad file descriptor 00:24:42.763 [2024-05-13 18:37:58.531047] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:42.763 [2024-05-13 18:37:58.531056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:42.763 [2024-05-13 18:37:58.531065] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:42.763 [2024-05-13 18:37:58.531080] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.763 [2024-05-13 18:37:58.534163] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:42.763 [2024-05-13 18:37:58.534256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.534303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.534320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928820 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-05-13 18:37:58.534330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928820 is same with the state(5) to be set 00:24:42.763 [2024-05-13 18:37:58.534347] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928820 (9): Bad file descriptor 00:24:42.763 [2024-05-13 18:37:58.534379] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:42.763 [2024-05-13 18:37:58.534390] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:42.763 [2024-05-13 18:37:58.534399] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:42.763 [2024-05-13 18:37:58.534413] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.763 [2024-05-13 18:37:58.540886] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:42.763 [2024-05-13 18:37:58.540984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.541046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.541066] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eafe0 with addr=10.0.0.3, port=4420 00:24:42.763 [2024-05-13 18:37:58.541076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eafe0 is same with the state(5) to be set 00:24:42.763 [2024-05-13 18:37:58.541094] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eafe0 (9): Bad file descriptor 00:24:42.763 [2024-05-13 18:37:58.541108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:42.763 [2024-05-13 18:37:58.541117] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:42.763 [2024-05-13 18:37:58.541126] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:42.763 [2024-05-13 18:37:58.541141] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.763 [2024-05-13 18:37:58.544220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:42.763 [2024-05-13 18:37:58.544306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.544351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.544368] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928820 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-05-13 18:37:58.544378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928820 is same with the state(5) to be set 00:24:42.763 [2024-05-13 18:37:58.544394] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928820 (9): Bad file descriptor 00:24:42.763 [2024-05-13 18:37:58.544425] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:42.763 [2024-05-13 18:37:58.544436] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:42.763 [2024-05-13 18:37:58.544445] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:42.763 [2024-05-13 18:37:58.544459] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
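While this reset/reconnect churn runs, the externally visible state is just each controller's path list, and the same RPC plus jq filter the test uses for get_subsystem_paths can be looped by hand to watch 4420 drop out. A sketch only, under the same scripts/rpc.py and /tmp/host.sock assumptions as above:

  # poll the transport service IDs of the mdns0_nvme0 controller once a second (Ctrl-C to stop)
  while sleep 1; do
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  done
  # prints "4420 4421" before the listener removal takes effect, then just "4421",
  # matching the checks before and after this stretch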
00:24:42.763 [2024-05-13 18:37:58.550946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:42.763 [2024-05-13 18:37:58.551048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.551100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.551116] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eafe0 with addr=10.0.0.3, port=4420 00:24:42.763 [2024-05-13 18:37:58.551127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eafe0 is same with the state(5) to be set 00:24:42.763 [2024-05-13 18:37:58.551144] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eafe0 (9): Bad file descriptor 00:24:42.763 [2024-05-13 18:37:58.551159] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:42.763 [2024-05-13 18:37:58.551167] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:42.763 [2024-05-13 18:37:58.551177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:42.763 [2024-05-13 18:37:58.551192] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.763 [2024-05-13 18:37:58.554282] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:42.763 [2024-05-13 18:37:58.554380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.554429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.554446] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928820 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-05-13 18:37:58.554457] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928820 is same with the state(5) to be set 00:24:42.763 [2024-05-13 18:37:58.554474] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928820 (9): Bad file descriptor 00:24:42.763 [2024-05-13 18:37:58.554510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:42.763 [2024-05-13 18:37:58.554522] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:42.763 [2024-05-13 18:37:58.554531] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:42.763 [2024-05-13 18:37:58.554558] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.763 [2024-05-13 18:37:58.561009] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:42.763 [2024-05-13 18:37:58.561126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.561176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.561193] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eafe0 with addr=10.0.0.3, port=4420 00:24:42.763 [2024-05-13 18:37:58.561203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eafe0 is same with the state(5) to be set 00:24:42.763 [2024-05-13 18:37:58.561220] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eafe0 (9): Bad file descriptor 00:24:42.763 [2024-05-13 18:37:58.561235] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:42.763 [2024-05-13 18:37:58.561244] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:42.763 [2024-05-13 18:37:58.561254] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:42.763 [2024-05-13 18:37:58.561269] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.763 [2024-05-13 18:37:58.564344] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:42.763 [2024-05-13 18:37:58.564440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.564487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.564504] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928820 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-05-13 18:37:58.564515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928820 is same with the state(5) to be set 00:24:42.763 [2024-05-13 18:37:58.564531] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928820 (9): Bad file descriptor 00:24:42.763 [2024-05-13 18:37:58.564563] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:42.763 [2024-05-13 18:37:58.564589] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:42.763 [2024-05-13 18:37:58.564600] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:42.763 [2024-05-13 18:37:58.564620] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.763 [2024-05-13 18:37:58.571080] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:42.763 [2024-05-13 18:37:58.571190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.571243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.571260] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eafe0 with addr=10.0.0.3, port=4420 00:24:42.763 [2024-05-13 18:37:58.571271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eafe0 is same with the state(5) to be set 00:24:42.763 [2024-05-13 18:37:58.571289] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eafe0 (9): Bad file descriptor 00:24:42.763 [2024-05-13 18:37:58.571304] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:42.763 [2024-05-13 18:37:58.571313] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:42.763 [2024-05-13 18:37:58.571322] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:42.763 [2024-05-13 18:37:58.571338] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.763 [2024-05-13 18:37:58.574410] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:42.763 [2024-05-13 18:37:58.574497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.574544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.574561] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928820 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-05-13 18:37:58.574583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928820 is same with the state(5) to be set 00:24:42.763 [2024-05-13 18:37:58.574602] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928820 (9): Bad file descriptor 00:24:42.763 [2024-05-13 18:37:58.574647] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:42.763 [2024-05-13 18:37:58.574660] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:42.763 [2024-05-13 18:37:58.574670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:42.763 [2024-05-13 18:37:58.574685] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.763 [2024-05-13 18:37:58.581150] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:42.763 [2024-05-13 18:37:58.581239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.581287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.581303] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eafe0 with addr=10.0.0.3, port=4420 00:24:42.763 [2024-05-13 18:37:58.581314] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eafe0 is same with the state(5) to be set 00:24:42.763 [2024-05-13 18:37:58.581331] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eafe0 (9): Bad file descriptor 00:24:42.763 [2024-05-13 18:37:58.581345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:42.763 [2024-05-13 18:37:58.581353] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:42.763 [2024-05-13 18:37:58.581363] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:42.763 [2024-05-13 18:37:58.581378] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.763 [2024-05-13 18:37:58.584466] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:42.763 [2024-05-13 18:37:58.584555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.584617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.763 [2024-05-13 18:37:58.584635] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928820 with addr=10.0.0.2, port=4420 00:24:42.763 [2024-05-13 18:37:58.584647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928820 is same with the state(5) to be set 00:24:42.763 [2024-05-13 18:37:58.584663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928820 (9): Bad file descriptor 00:24:42.763 [2024-05-13 18:37:58.584710] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:42.763 [2024-05-13 18:37:58.584723] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:42.764 [2024-05-13 18:37:58.584732] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:42.764 [2024-05-13 18:37:58.584747] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.764 [2024-05-13 18:37:58.591207] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:42.764 [2024-05-13 18:37:58.591297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.764 [2024-05-13 18:37:58.591344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.764 [2024-05-13 18:37:58.591361] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eafe0 with addr=10.0.0.3, port=4420 00:24:42.764 [2024-05-13 18:37:58.591372] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eafe0 is same with the state(5) to be set 00:24:42.764 [2024-05-13 18:37:58.591389] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eafe0 (9): Bad file descriptor 00:24:42.764 [2024-05-13 18:37:58.591403] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:42.764 [2024-05-13 18:37:58.591412] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:42.764 [2024-05-13 18:37:58.591421] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:42.764 [2024-05-13 18:37:58.591436] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.764 [2024-05-13 18:37:58.594525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:42.764 [2024-05-13 18:37:58.594621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.764 [2024-05-13 18:37:58.594668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.764 [2024-05-13 18:37:58.594685] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928820 with addr=10.0.0.2, port=4420 00:24:42.764 [2024-05-13 18:37:58.594695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928820 is same with the state(5) to be set 00:24:42.764 [2024-05-13 18:37:58.594711] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928820 (9): Bad file descriptor 00:24:42.764 [2024-05-13 18:37:58.594755] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:42.764 [2024-05-13 18:37:58.594769] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:42.764 [2024-05-13 18:37:58.594778] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:42.764 [2024-05-13 18:37:58.594792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.764 [2024-05-13 18:37:58.601267] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:24:42.764 [2024-05-13 18:37:58.601361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.764 [2024-05-13 18:37:58.601409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.764 [2024-05-13 18:37:58.601425] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18eafe0 with addr=10.0.0.3, port=4420 00:24:42.764 [2024-05-13 18:37:58.601436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18eafe0 is same with the state(5) to be set 00:24:42.764 [2024-05-13 18:37:58.601452] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eafe0 (9): Bad file descriptor 00:24:42.764 [2024-05-13 18:37:58.601467] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:24:42.764 [2024-05-13 18:37:58.601476] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:24:42.764 [2024-05-13 18:37:58.601485] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:24:42.764 [2024-05-13 18:37:58.601500] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.764 [2024-05-13 18:37:58.604591] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:42.764 [2024-05-13 18:37:58.604675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.764 [2024-05-13 18:37:58.604739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.764 [2024-05-13 18:37:58.604756] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1928820 with addr=10.0.0.2, port=4420 00:24:42.764 [2024-05-13 18:37:58.604766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1928820 is same with the state(5) to be set 00:24:42.764 [2024-05-13 18:37:58.604783] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1928820 (9): Bad file descriptor 00:24:42.764 [2024-05-13 18:37:58.604815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:42.764 [2024-05-13 18:37:58.604826] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:42.764 [2024-05-13 18:37:58.604836] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:42.764 [2024-05-13 18:37:58.604851] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
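The entries just below close the loop: the next discovery log page reports both 4420 paths as "not found" while the 4421 paths are "found again", and the re-checks at 18:37:59 then expect a single trsvcid (4421) per controller and zero notifications newer than the last recorded notify_id of 4. The notification-count helper is just an offset query plus a jq length; a standalone sketch with the same socket assumption:

  scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 4 | jq '. | length'
  # 0 at this point in the run: the path flip does not add or remove bdevs, so nothing new shows up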
00:24:42.764 [2024-05-13 18:37:58.609828] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:24:42.764 [2024-05-13 18:37:58.609863] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:42.764 [2024-05-13 18:37:58.609904] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:42.764 [2024-05-13 18:37:58.609943] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:24:42.764 [2024-05-13 18:37:58.609959] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:42.764 [2024-05-13 18:37:58.609973] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:42.764 [2024-05-13 18:37:58.695913] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:42.764 [2024-05-13 18:37:58.695995] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 
00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:24:43.699 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # sort -n 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@72 -- # xargs 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=0 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=4 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:43.958 18:37:59 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # sleep 1 00:24:43.958 [2024-05-13 18:37:59.862325] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:24:44.890 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:24:44.890 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:24:44.890 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:24:44.890 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.890 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.890 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # sort 00:24:44.890 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # xargs 00:24:44.890 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # xargs 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # sort 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@64 -- # xargs 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:24:45.148 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count 00:24:45.149 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:24:45.149 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:24:45.149 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.149 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.149 18:38:00 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # notification_count=4 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # notify_id=8 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.149 [2024-05-13 18:38:01.051466] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:24:45.149 2024/05/13 18:38:01 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: 
Code=-17 Msg=File exists 00:24:45.149 request: 00:24:45.149 { 00:24:45.149 "method": "bdev_nvme_start_mdns_discovery", 00:24:45.149 "params": { 00:24:45.149 "name": "mdns", 00:24:45.149 "svcname": "_nvme-disc._http", 00:24:45.149 "hostnqn": "nqn.2021-12.io.spdk:test" 00:24:45.149 } 00:24:45.149 } 00:24:45.149 Got JSON-RPC error response 00:24:45.149 GoRPCClient: error on JSON-RPC call 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:45.149 18:38:01 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # sleep 5 00:24:45.714 [2024-05-13 18:38:01.440112] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:24:45.714 [2024-05-13 18:38:01.540100] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:24:45.714 [2024-05-13 18:38:01.640113] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:24:45.714 [2024-05-13 18:38:01.640163] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:24:45.714 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:24:45.714 cookie is 0 00:24:45.714 is_local: 1 00:24:45.714 our_own: 0 00:24:45.714 wide_area: 0 00:24:45.714 multicast: 1 00:24:45.714 cached: 1 00:24:45.973 [2024-05-13 18:38:01.740119] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:24:45.973 [2024-05-13 18:38:01.740174] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:24:45.973 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:24:45.973 cookie is 0 00:24:45.973 is_local: 1 00:24:45.973 our_own: 0 00:24:45.973 wide_area: 0 00:24:45.973 multicast: 1 00:24:45.973 cached: 1 00:24:46.909 [2024-05-13 18:38:02.653803] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:24:46.909 [2024-05-13 18:38:02.653853] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:24:46.909 [2024-05-13 18:38:02.653882] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:24:46.909 [2024-05-13 18:38:02.739920] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:24:46.909 [2024-05-13 18:38:02.753762] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:46.909 [2024-05-13 18:38:02.753806] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:46.909 [2024-05-13 18:38:02.753825] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:46.909 [2024-05-13 18:38:02.811215] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:24:46.909 [2024-05-13 18:38:02.811269] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:24:46.909 [2024-05-13 18:38:02.840413] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:24:47.167 [2024-05-13 18:38:02.899999] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:24:47.167 [2024-05-13 18:38:02.900054] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # sort 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@80 -- # xargs 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # sort 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # xargs 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ 
\m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@648 -- # local es=0 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.452 [2024-05-13 18:38:06.247353] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:24:50.452 2024/05/13 18:38:06 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:24:50.452 request: 00:24:50.452 { 00:24:50.452 "method": "bdev_nvme_start_mdns_discovery", 00:24:50.452 "params": { 00:24:50.452 "name": "cdc", 00:24:50.452 "svcname": "_nvme-disc._tcp", 00:24:50.452 "hostnqn": "nqn.2021-12.io.spdk:test" 00:24:50.452 } 00:24:50.452 } 00:24:50.452 Got JSON-RPC error response 00:24:50.452 GoRPCClient: error on JSON-RPC call 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # sort 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@76 -- # xargs 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
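The two rejected calls traced above are the negative half of the mDNS test: once a discovery service named "mdns" is registered for _nvme-disc._tcp, registering the same name for a different service type, or a different name for the same service type, is refused with JSON-RPC error -17 ("File exists"). A minimal stand-alone sketch of that check, using only the rpc.py invocations and arguments visible in this log; the expect_eexist wrapper name is made up for illustration, and the host application is assumed to already be listening on /tmp/host.sock:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Expect the given RPC to fail; the trace above shows the failure as Code=-17 Msg=File exists.
expect_eexist() {
    if "$rpc_py" -s /tmp/host.sock "$@"; then
        echo "expected duplicate-registration failure, but RPC succeeded: $*" >&2
        return 1
    fi
    return 0
}

# First registration succeeds; both duplicates must be rejected.
"$rpc_py" -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
expect_eexist bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test
expect_eexist bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test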
00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # sort 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@64 -- # xargs 00:24:50.452 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.453 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:24:50.453 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:24:50.453 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:50.453 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.453 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.453 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:24:50.453 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # kill 96295 00:24:50.453 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # wait 96295 00:24:50.711 [2024-05-13 18:38:06.490898] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:24:50.711 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # kill 96375 00:24:50.711 Got SIGTERM, quitting. 00:24:50.711 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # kill 96324 00:24:50.711 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:24:50.711 Got SIGTERM, quitting. 00:24:50.711 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:50.711 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@117 -- # sync 00:24:50.711 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:24:50.711 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:24:50.711 avahi-daemon 0.8 exiting. 
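The list assertions in the discovery checks above (get_mdns_discovery_svcs, get_discovery_ctrlrs, get_bdev_list) all follow the same shape: query the host-side RPC socket, extract the .name fields with jq, and flatten them into one sorted, space-separated line so the result can be compared as a plain string. A sketch that approximates those helpers, assuming the same /tmp/host.sock socket used in this run; the real mdns_discovery.sh versions go through the suite's rpc_cmd wrapper, so details may differ slightly:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
host_sock=/tmp/host.sock

get_bdev_list() {
    # Prints e.g. "mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2", as asserted above.
    "$rpc_py" -s "$host_sock" bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_discovery_ctrlrs() {
    # Prints e.g. "mdns0_nvme mdns1_nvme".
    "$rpc_py" -s "$host_sock" bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs
}

[[ $(get_discovery_ctrlrs) == "mdns0_nvme mdns1_nvme" ]]
[[ $(get_bdev_list) == "mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2" ]]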
00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@120 -- # set +e 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:50.970 rmmod nvme_tcp 00:24:50.970 rmmod nvme_fabrics 00:24:50.970 rmmod nvme_keyring 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set -e 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # return 0 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@489 -- # '[' -n 96245 ']' 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@490 -- # killprocess 96245 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@946 -- # '[' -z 96245 ']' 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@950 -- # kill -0 96245 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # uname 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96245 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:50.970 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:50.970 killing process with pid 96245 00:24:50.971 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96245' 00:24:50.971 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@965 -- # kill 96245 00:24:50.971 [2024-05-13 18:38:06.755488] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:50.971 18:38:06 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@970 -- # wait 96245 00:24:51.229 18:38:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:51.229 18:38:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:51.229 18:38:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:51.229 18:38:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:51.229 18:38:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:51.229 18:38:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.229 18:38:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.229 18:38:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.229 18:38:07 nvmf_tcp.nvmf_mdns_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:51.229 00:24:51.229 real 0m20.865s 00:24:51.229 user 0m40.834s 00:24:51.229 sys 0m2.070s 00:24:51.229 18:38:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:51.229 18:38:07 nvmf_tcp.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:24:51.229 ************************************ 00:24:51.229 END TEST nvmf_mdns_discovery 00:24:51.229 ************************************ 00:24:51.229 18:38:07 nvmf_tcp -- nvmf/nvmf.sh@114 -- # [[ 1 -eq 1 ]] 00:24:51.229 18:38:07 nvmf_tcp -- nvmf/nvmf.sh@115 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:51.229 18:38:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:24:51.229 18:38:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:51.229 18:38:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:51.229 ************************************ 00:24:51.229 START TEST nvmf_host_multipath 00:24:51.229 ************************************ 00:24:51.229 18:38:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:24:51.488 * Looking for test storage... 00:24:51.488 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:51.488 18:38:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:51.488 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:24:51.488 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:51.488 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:51.488 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:51.488 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:51.488 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:51.488 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:51.488 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:51.488 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@47 -- # : 0 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:51.489 18:38:07 
nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@432 -- # nvmf_veth_init 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br 
nomaster 00:24:51.489 Cannot find device "nvmf_tgt_br" 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@155 -- # true 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:51.489 Cannot find device "nvmf_tgt_br2" 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@156 -- # true 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:51.489 Cannot find device "nvmf_tgt_br" 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@158 -- # true 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:51.489 Cannot find device "nvmf_tgt_br2" 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@159 -- # true 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:51.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:24:51.489 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:51.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:51.490 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:24:51.490 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:51.490 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:51.490 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:51.490 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:51.490 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:51.490 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:51.490 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:51.490 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:51.748 
18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:51.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:51.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:24:51.748 00:24:51.748 --- 10.0.0.2 ping statistics --- 00:24:51.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.748 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:51.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:51.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:24:51.748 00:24:51.748 --- 10.0.0.3 ping statistics --- 00:24:51.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.748 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:51.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:51.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:24:51.748 00:24:51.748 --- 10.0.0.1 ping statistics --- 00:24:51.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:51.748 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@433 -- # return 0 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@481 -- # nvmfpid=96877 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@482 -- # waitforlisten 96877 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 96877 ']' 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:51.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:51.748 18:38:07 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:51.748 [2024-05-13 18:38:07.653256] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:24:51.748 [2024-05-13 18:38:07.653362] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.007 [2024-05-13 18:38:07.794548] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:52.007 [2024-05-13 18:38:07.921728] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.007 [2024-05-13 18:38:07.921995] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:52.007 [2024-05-13 18:38:07.922129] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.007 [2024-05-13 18:38:07.922184] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.007 [2024-05-13 18:38:07.922216] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.007 [2024-05-13 18:38:07.922465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:52.007 [2024-05-13 18:38:07.922475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.943 18:38:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:52.943 18:38:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:24:52.943 18:38:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:52.943 18:38:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:52.943 18:38:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:52.943 18:38:08 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:52.943 18:38:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=96877 00:24:52.943 18:38:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:53.201 [2024-05-13 18:38:09.030479] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.201 18:38:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:53.459 Malloc0 00:24:53.459 18:38:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:54.026 18:38:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:54.284 18:38:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.543 [2024-05-13 18:38:10.238107] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:54.543 [2024-05-13 18:38:10.238410] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.543 18:38:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:54.801 [2024-05-13 18:38:10.542546] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:54.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
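Stripped of the xtrace noise, the target-side bring-up traced above for the multipath test reduces to six RPC calls: create the TCP transport, create a Malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 as set in the script), create subsystem nqn.2016-06.io.spdk:cnode1 with the access/ANA flags used in the trace, attach the namespace, and add listeners on 10.0.0.2 ports 4420 and 4421. A condensed sketch with every path, address and flag copied as-is from this run; the target is assumed to be up and serving its default RPC socket:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1

"$rpc_py" nvmf_create_transport -t tcp -o -u 8192
"$rpc_py" bdev_malloc_create 64 512 -b Malloc0
"$rpc_py" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2
"$rpc_py" nvmf_subsystem_add_ns "$NQN" Malloc0
"$rpc_py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
"$rpc_py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421

Later in the trace, nvmf_subsystem_listener_set_ana_state flips the ANA state of each listener so that the bpftrace probe output can confirm bdevperf I/O is following the optimized path.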
00:24:54.801 18:38:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=96985 00:24:54.801 18:38:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:54.801 18:38:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:54.801 18:38:10 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 96985 /var/tmp/bdevperf.sock 00:24:54.801 18:38:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@827 -- # '[' -z 96985 ']' 00:24:54.801 18:38:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.801 18:38:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:54.801 18:38:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:54.801 18:38:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:54.801 18:38:10 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:24:55.737 18:38:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:55.737 18:38:11 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@860 -- # return 0 00:24:55.737 18:38:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:55.995 18:38:11 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:56.254 Nvme0n1 00:24:56.512 18:38:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:56.770 Nvme0n1 00:24:56.770 18:38:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:24:56.770 18:38:12 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:58.144 18:38:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:24:58.144 18:38:13 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:58.144 18:38:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:58.402 18:38:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:24:58.402 18:38:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97068 00:24:58.402 18:38:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:24:58.402 18:38:14 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96877 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:04.971 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:04.971 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:04.971 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:04.971 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:04.971 Attaching 4 probes... 00:25:04.971 @path[10.0.0.2, 4421]: 16640 00:25:04.971 @path[10.0.0.2, 4421]: 17139 00:25:04.971 @path[10.0.0.2, 4421]: 17231 00:25:04.971 @path[10.0.0.2, 4421]: 17432 00:25:04.971 @path[10.0.0.2, 4421]: 17594 00:25:04.971 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:04.971 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:04.971 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:04.971 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:04.971 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:04.971 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:04.971 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97068 00:25:04.971 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:04.971 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:25:04.971 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:05.228 18:38:20 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:05.486 18:38:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:25:05.486 18:38:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97203 00:25:05.486 18:38:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96877 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:05.486 18:38:21 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:12.049 Attaching 4 probes... 
00:25:12.049 @path[10.0.0.2, 4420]: 16026 00:25:12.049 @path[10.0.0.2, 4420]: 16268 00:25:12.049 @path[10.0.0.2, 4420]: 16256 00:25:12.049 @path[10.0.0.2, 4420]: 16642 00:25:12.049 @path[10.0.0.2, 4420]: 16142 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97203 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:12.049 18:38:27 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:12.412 18:38:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:25:12.412 18:38:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97334 00:25:12.412 18:38:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96877 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:12.412 18:38:28 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:19.009 Attaching 4 probes... 
00:25:19.009 @path[10.0.0.2, 4421]: 14389 00:25:19.009 @path[10.0.0.2, 4421]: 17115 00:25:19.009 @path[10.0.0.2, 4421]: 16572 00:25:19.009 @path[10.0.0.2, 4421]: 16255 00:25:19.009 @path[10.0.0.2, 4421]: 17179 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97334 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:19.009 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:19.268 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:25:19.268 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97466 00:25:19.268 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96877 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:19.268 18:38:34 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:25.892 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:25.893 Attaching 4 probes... 
00:25:25.893 00:25:25.893 00:25:25.893 00:25:25.893 00:25:25.893 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97466 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97591 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:25.893 18:38:41 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96877 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:32.457 18:38:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:32.457 18:38:47 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:32.457 18:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:32.457 18:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:32.457 Attaching 4 probes... 
00:25:32.457 @path[10.0.0.2, 4421]: 16775 00:25:32.457 @path[10.0.0.2, 4421]: 17075 00:25:32.457 @path[10.0.0.2, 4421]: 17091 00:25:32.457 @path[10.0.0.2, 4421]: 16956 00:25:32.457 @path[10.0.0.2, 4421]: 17138 00:25:32.457 18:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:32.457 18:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:32.457 18:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:32.457 18:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:32.457 18:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:32.457 18:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:32.457 18:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97591 00:25:32.457 18:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:32.457 18:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:32.457 [2024-05-13 18:38:48.359840] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.457 [2024-05-13 18:38:48.359899] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.457 [2024-05-13 18:38:48.359912] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.457 [2024-05-13 18:38:48.359921] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.457 [2024-05-13 18:38:48.359930] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.457 [2024-05-13 18:38:48.359940] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.457 [2024-05-13 18:38:48.359949] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.359959] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.359968] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.359977] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.359986] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.359995] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360004] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360013] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 
00:25:32.458 [2024-05-13 18:38:48.360022] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360030] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360039] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360048] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360056] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360065] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360073] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360082] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360091] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360100] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360108] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360117] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360126] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360134] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360144] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360153] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360161] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360170] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 [2024-05-13 18:38:48.360179] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ac4d0 is same with the state(5) to be set 00:25:32.458 18:38:48 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:25:33.833 18:38:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:25:33.833 18:38:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97726 00:25:33.833 18:38:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 
96877 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:33.833 18:38:49 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:40.393 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:40.393 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:40.393 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:40.394 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:40.394 Attaching 4 probes... 00:25:40.394 @path[10.0.0.2, 4420]: 16935 00:25:40.394 @path[10.0.0.2, 4420]: 17156 00:25:40.394 @path[10.0.0.2, 4420]: 17051 00:25:40.394 @path[10.0.0.2, 4420]: 17163 00:25:40.394 @path[10.0.0.2, 4420]: 16940 00:25:40.394 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:40.394 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:40.394 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:40.394 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:40.394 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:40.394 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:40.394 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97726 00:25:40.394 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:40.394 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:40.394 [2024-05-13 18:38:55.917246] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:40.394 18:38:55 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:40.394 18:38:56 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:25:46.951 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:25:46.951 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=97909 00:25:46.951 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:46.951 18:39:02 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96877 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:53.521 Attaching 4 
probes... 00:25:53.521 @path[10.0.0.2, 4421]: 16691 00:25:53.521 @path[10.0.0.2, 4421]: 16734 00:25:53.521 @path[10.0.0.2, 4421]: 17059 00:25:53.521 @path[10.0.0.2, 4421]: 16961 00:25:53.521 @path[10.0.0.2, 4421]: 16987 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 97909 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 96985 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 96985 ']' 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 96985 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96985 00:25:53.521 killing process with pid 96985 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96985' 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # kill 96985 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 96985 00:25:53.521 Connection closed with partial response: 00:25:53.521 00:25:53.521 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 96985 00:25:53.521 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:53.521 [2024-05-13 18:38:10.625399] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:25:53.521 [2024-05-13 18:38:10.625541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96985 ] 00:25:53.521 [2024-05-13 18:38:10.762354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.521 [2024-05-13 18:38:10.893807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:53.521 Running I/O for 90 seconds... 
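[Editorial note, not part of the captured log] The host-side trace above shows the confirm_io_on_port step of multipath.sh: the script asks the target which listener currently reports the expected ANA state, extracts the port that the bpftrace nvmf_path.bt probe actually counted I/O on, and requires the two to match. The sketch below is a hedged reconstruction of that flow using only the commands visible in the trace; paths and the subsystem NQN are copied from the log, but the function shape, variable names, and the exact pipeline order are assumptions and may differ from the real multipath.sh.

```bash
#!/usr/bin/env bash
# Hedged sketch of the confirm_io_on_port flow seen in the trace above.
# Assumes trace.txt was already produced by scripts/bpftrace.sh running
# scripts/bpf/nvmf_path.bt against the initiator, as shown in the log.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

ana_state=$1      # e.g. optimized, non_optimized
expected_port=$2  # e.g. 4420, 4421

# Port of the listener that currently reports the expected ANA state.
active_port=$("$rpc" nvmf_subsystem_get_listeners "$nqn" |
  jq -r ".[] | select(.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")

# Port the bpftrace probe actually saw I/O on; trace.txt lines look like
# '@path[10.0.0.2, 4421]: 16775'.
port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | sed -n 1p | cut -d ']' -f1)

# I/O must have been observed on the expected port, and that port must be the
# one the target reports in the expected ANA state.
[[ "$port" == "$expected_port" ]]
[[ "$port" == "$active_port" ]]
```

Invoked as, for example, `confirm_io_on_port non_optimized 4420` or `confirm_io_on_port optimized 4421`, mirroring the calls visible at multipath.sh@104 and @112 in the trace above.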
00:25:53.521 [2024-05-13 18:38:21.208526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.521 [2024-05-13 18:38:21.208623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:53.521 [2024-05-13 18:38:21.208701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.521 [2024-05-13 18:38:21.208726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:53.521 [2024-05-13 18:38:21.208750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.521 [2024-05-13 18:38:21.208766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:53.521 [2024-05-13 18:38:21.208788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.521 [2024-05-13 18:38:21.208804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:53.521 [2024-05-13 18:38:21.208825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.521 [2024-05-13 18:38:21.208840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:53.521 [2024-05-13 18:38:21.208868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.521 [2024-05-13 18:38:21.208884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:53.521 [2024-05-13 18:38:21.208905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.521 [2024-05-13 18:38:21.208920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.208942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.208957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.208978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.208994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.209015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.209030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.209052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.209098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.209135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.209150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.209173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.209187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.209208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.209223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.209244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.209260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.209281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:64696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.209296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.209737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.209764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.209791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.209808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.209829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.209844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.209866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:64728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.209880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.209901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.209916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.209937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:64744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.209953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.209975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:64752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.209989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.210024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.210041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.210713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.210749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.210773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:64776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.210788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.210810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.210825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.210847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.210862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.210884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:64800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.210899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.210921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:53.522 [2024-05-13 18:38:21.210936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.210958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.210973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.210995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:64824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:64848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:64864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 
lba:64888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:64912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:64944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.211629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.211654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.212218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.212244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.212271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.212288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.212310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:64976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.212326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.212348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.212363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.212384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.522 [2024-05-13 18:38:21.212399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.212426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.522 [2024-05-13 18:38:21.212441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.212462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.522 [2024-05-13 18:38:21.212477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.212499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.522 [2024-05-13 18:38:21.212515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.212537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.522 [2024-05-13 18:38:21.212552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.212587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.522 [2024-05-13 18:38:21.212605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.212628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.522 [2024-05-13 18:38:21.212643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:25:53.522 [2024-05-13 18:38:21.212665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.522 [2024-05-13 18:38:21.212702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.212728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.522 [2024-05-13 18:38:21.212746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.212768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.522 [2024-05-13 18:38:21.212783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:53.522 [2024-05-13 18:38:21.212804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.522 [2024-05-13 18:38:21.212819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.212841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.212856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.212878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.212893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.212916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.212931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.212959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.212975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.212996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213425] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:53.523 [2024-05-13 18:38:21.213830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.213975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.213997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.214012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.214049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.214086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.214130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:21.214170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:71 nsid:1 lba:65000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:65008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:65024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:65032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:65040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:65048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:65056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:65064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:65072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214567] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:65080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:65096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:65104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:21.214745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.523 [2024-05-13 18:38:21.214761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:27.805409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:101240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:27.805494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:27.805586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:27.805617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:27.805643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:27.805662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:27.805685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:27.805700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:53.523 [2024-05-13 18:38:27.805722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.523 [2024-05-13 18:38:27.805737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 
00:25:53.523 [2024-05-13 18:38:27.805760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.805775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.805797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.805812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.805835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.805850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.805903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.805920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.805942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.805957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.805979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.805994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.806016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.806031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.806053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.806069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.806091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.806107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.806129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.806144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.806167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.806182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.806205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.806221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.806245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.806260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.806283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.806299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.806322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.806338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.806920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:101400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.806950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.806979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.806996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.807020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:101416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.807036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.807059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:101424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.807075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.807098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:101432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.807113] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.808623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:101440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.524 [2024-05-13 18:38:27.808654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.808711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:101448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.808738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.808773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:101456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.808801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.808836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.808863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.808905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:101472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.808928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.808968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:101480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.808994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.809031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.809058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.809112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:101496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.809141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.809179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.809205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.809255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101512 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.809279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.809319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:101520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.809343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.809381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.809408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.809445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:101536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.809472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.809512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.809538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.809592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.809620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.809658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:101560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.809684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.809724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:101568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.809749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.809934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:101576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.809969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.810013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.810043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.810083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:112 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.810129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.810172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.810199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.810240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.810266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.810307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.810331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:53.524 [2024-05-13 18:38:27.810372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.524 [2024-05-13 18:38:27.810398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.810436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.810464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.810503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.810528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.810592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.810622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.810663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.810690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.810731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.810756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.810799] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.810825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.810864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.810891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.810930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.810969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.811041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.811107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.811174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.811241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.811307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.811373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.811439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 
sqhd:0035 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.811506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.811599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.811669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.811734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.811799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.811880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.811948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.811987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.812053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.812121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.812187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.812250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.812316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.812383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.812453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.812518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.812598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.812671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.812761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.812829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812853] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.812896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.812957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.812980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:27.813020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:27.813048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.974072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:122968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.974159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.974223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:122976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.974246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.974270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:122984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.974287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.974312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:122992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.974328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.974350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:123000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.974366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.974387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:123008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.974403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.974425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:123016 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.974469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.974494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:123024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.974510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.976311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:123032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.976352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.976386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:123040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.976403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.976427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:123048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.976443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.976467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:123056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.976482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.976507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:123064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.976522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.976546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:123072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.976562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.976602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:123080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.976620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.976644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:123088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.976660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.976695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:23 nsid:1 lba:123096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.976712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.976737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:123104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.976753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.976778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:123112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.525 [2024-05-13 18:38:34.976806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:53.525 [2024-05-13 18:38:34.976833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:123120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.976849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.976874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:123128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.976890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.976920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.976936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.976960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:123144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.976976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:123152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:123168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977120] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:123176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:123184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:123192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:123200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:123208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:123216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:123224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:123232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:123240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:123248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 
sqhd:0008 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:123256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:123272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:123280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:123288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.526 [2024-05-13 18:38:34.977901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:122584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.977945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.977972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:122592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.977987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:122608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:122616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978210] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:122624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:122632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:122640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:122648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:122656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:122664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:122672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:122688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:122696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 
[2024-05-13 18:38:34.978669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:122704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:122712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:122728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:122736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:122744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:122752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.978972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.978998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.979014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.979040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:122768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.979056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.979082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 
lba:122776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.979098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.979124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:122784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.979139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.979166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:122792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.979189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.979216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:122800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.979243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.979270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:122808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.979287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.979313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:122816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.979328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.979354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:122824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.979370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.979397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:122832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.979413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.979439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:122840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.979454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.979480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:122848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.979496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.979522] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:122856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.979538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.979564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:122864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.526 [2024-05-13 18:38:34.979591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:53.526 [2024-05-13 18:38:34.979618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:122872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.527 [2024-05-13 18:38:34.979634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:34.979660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:122880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.527 [2024-05-13 18:38:34.979676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:34.979702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:122888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.527 [2024-05-13 18:38:34.979725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:34.979753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:122896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.527 [2024-05-13 18:38:34.979779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:34.979805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:122904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.527 [2024-05-13 18:38:34.979820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:34.979846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:122912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.527 [2024-05-13 18:38:34.979862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:34.979888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:122920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.527 [2024-05-13 18:38:34.979903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:34.979929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:122928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.527 [2024-05-13 18:38:34.979944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0039 
p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:34.979971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:122936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.527 [2024-05-13 18:38:34.979986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:34.980012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:122944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.527 [2024-05-13 18:38:34.980028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:34.980054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:122952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.527 [2024-05-13 18:38:34.980068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:34.980098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:122960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.527 [2024-05-13 18:38:34.980114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:48.361004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.527 [2024-05-13 18:38:48.361051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:48.361080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:31472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.527 [2024-05-13 18:38:48.361095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:48.361320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:31480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.527 [2024-05-13 18:38:48.361345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:48.361385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:31488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.527 [2024-05-13 18:38:48.361400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:48.361415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:31496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.527 [2024-05-13 18:38:48.361429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:48.361444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:31504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:53.527 [2024-05-13 18:38:48.361458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:53.527 [2024-05-13 18:38:48.361473 - 18:38:48.375966] nvme_qpair.c: 243:nvme_io_qpair_print_command, 474:spdk_nvme_print_completion, 558:nvme_qpair_manual_complete_request, 579:nvme_qpair_abort_queued_reqs: *NOTICE*/*ERROR*: every outstanding command on sqid:1 (WRITE lba:31512-32224 and READ lba:31216-31464, len:8, SGL and PRP variants) is printed, the queued i/o is aborted, and each one completes with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the final completion is 00:25:53.530 [2024-05-13 18:38:48.375966] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.530 [2024-05-13 18:38:48.376028] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ba8940 was disconnected and freed. reset controller. 00:25:53.530 [2024-05-13 18:38:48.376153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.530 [2024-05-13 18:38:48.376180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.530 [2024-05-13 18:38:48.376196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.530 [2024-05-13 18:38:48.376209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.530 [2024-05-13 18:38:48.376223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.530 [2024-05-13 18:38:48.376238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.530 [2024-05-13 18:38:48.376252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:53.530 [2024-05-13 18:38:48.376266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:53.530 [2024-05-13 18:38:48.376279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28d00 is same with the state(5) to be set 00:25:53.530 [2024-05-13 18:38:48.378158] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.530 [2024-05-13 18:38:48.378215] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b28d00 (9): Bad file descriptor 00:25:53.530 [2024-05-13 18:38:48.378405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.530 [2024-05-13 18:38:48.378488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.530 [2024-05-13 18:38:48.378521] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b28d00 with addr=10.0.0.2, port=4421 00:25:53.530 [2024-05-13 18:38:48.378544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b28d00 is same with the state(5) to be set 00:25:53.530 [2024-05-13 18:38:48.378598] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b28d00 (9): Bad file descriptor 00:25:53.530 [2024-05-13 18:38:48.378633] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:53.530 [2024-05-13 18:38:48.378653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:53.530 [2024-05-13 18:38:48.378674] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:53.530 [2024-05-13 18:38:48.378709] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
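The two posix_sock_create failures above report errno 111, which on Linux is ECONNREFUSED: at that instant nothing was listening on 10.0.0.2:4421, so the first reconnect attempt for nqn.2016-06.io.spdk:cnode1 fails and bdev_nvme keeps retrying the reset until the listener returns (it succeeds about ten seconds later, below). A minimal shell sketch of the same "wait until the port accepts connections" idea, offered as an illustration only — wait_for_listener is not a helper from the test scripts, and the host/port values are simply the ones printed in this log:

  # Sketch: poll a TCP listener until connect() stops failing with ECONNREFUSED (errno 111).
  wait_for_listener() {
    local host=$1 port=$2 retries=${3:-30}
    for ((i = 0; i < retries; i++)); do
      # bash's /dev/tcp redirection performs a plain connect(); while nothing is
      # bound to the port it fails, just like the nvme_tcp reconnect above.
      if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
        return 0
      fi
      sleep 1
    done
    return 1
  }
  wait_for_listener 10.0.0.2 4421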
00:25:53.530 [2024-05-13 18:38:48.378729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:53.530 [2024-05-13 18:38:58.481516] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:53.530 Received shutdown signal, test time was about 55.794833 seconds 00:25:53.530 00:25:53.530 Latency(us) 00:25:53.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.530 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:53.530 Verification LBA range: start 0x0 length 0x4000 00:25:53.530 Nvme0n1 : 55.79 7254.40 28.34 0.00 0.00 17611.20 359.33 7046430.72 00:25:53.530 =================================================================================================================== 00:25:53.530 Total : 7254.40 28.34 0.00 0.00 17611.20 359.33 7046430.72 00:25:53.530 18:39:08 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@117 -- # sync 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@120 -- # set +e 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:53.530 rmmod nvme_tcp 00:25:53.530 rmmod nvme_fabrics 00:25:53.530 rmmod nvme_keyring 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@124 -- # set -e 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@125 -- # return 0 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@489 -- # '[' -n 96877 ']' 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@490 -- # killprocess 96877 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@946 -- # '[' -z 96877 ']' 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@950 -- # kill -0 96877 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # uname 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 96877 00:25:53.530 killing process with pid 96877 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@964 -- # echo 'killing process with pid 96877' 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@965 -- # 
kill 96877 00:25:53.530 [2024-05-13 18:39:09.180768] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:53.530 18:39:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@970 -- # wait 96877 00:25:53.788 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:53.788 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:53.788 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:53.788 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:53.788 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:53.788 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.788 18:39:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.788 18:39:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.788 18:39:09 nvmf_tcp.nvmf_host_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:53.788 00:25:53.788 real 1m2.401s 00:25:53.788 user 2m56.720s 00:25:53.788 sys 0m14.168s 00:25:53.788 18:39:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:53.788 ************************************ 00:25:53.788 END TEST nvmf_host_multipath 00:25:53.788 ************************************ 00:25:53.788 18:39:09 nvmf_tcp.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:53.788 18:39:09 nvmf_tcp -- nvmf/nvmf.sh@116 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:25:53.788 18:39:09 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:25:53.788 18:39:09 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:53.788 18:39:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:53.788 ************************************ 00:25:53.788 START TEST nvmf_timeout 00:25:53.788 ************************************ 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:25:53.788 * Looking for test storage... 
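The multipath run that finishes just above tears itself down in three steps, all visible in the log: the subsystem is deleted over JSON-RPC, the kernel NVMe/TCP initiator modules are unloaded, and the nvmf target process (pid 96877 here) is killed and reaped. A condensed sketch of that sequence — not a verbatim excerpt of multipath.sh or nvmf/common.sh, and $tgt_pid is a placeholder for the PID the harness tracks:

  # Sketch of the teardown shown above (multipath.sh@120-125 plus nvmftestfini).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  sync
  modprobe -v -r nvme-tcp      # the rmmod lines above show this also removes nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$tgt_pid" && wait "$tgt_pid"   # killprocess(): stop the nvmf target and collect its exit status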
00:25:53.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.788 
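Before any traffic flows, timeout.sh sources nvmf/common.sh (above), which generates a fresh host NQN with nvme gen-hostnqn, derives the host ID from its UUID suffix, and packs both into the NVME_HOST array so every connection in the test identifies itself consistently. A hedged illustration of how those variables are typically consumed — the connect call below is not copied from timeout.sh, and the address and port are just the values printed in this log (first target 10.0.0.2, NVMF_PORT=4420):

  # Illustrative only: how the host identity set up above would be passed to nvme-cli.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:3bc393d8-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # the UUID part doubles as the host ID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn "${NVME_HOST[@]}"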
18:39:09 nvmf_tcp.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.788 18:39:09 nvmf_tcp.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@47 -- # : 0 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.789 18:39:09 
nvmf_tcp.nvmf_timeout -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@432 -- # nvmf_veth_init 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:53.789 Cannot find device "nvmf_tgt_br" 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@155 -- # true 00:25:53.789 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:54.047 Cannot find device "nvmf_tgt_br2" 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@156 -- # true 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:54.047 Cannot find device "nvmf_tgt_br" 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@158 -- # true 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:54.047 Cannot find device "nvmf_tgt_br2" 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@159 -- # true 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:54.047 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:54.047 Cannot open network 
namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:54.047 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:54.048 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:54.048 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:54.306 18:39:09 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:54.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:25:54.306 00:25:54.306 --- 10.0.0.2 ping statistics --- 00:25:54.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.306 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:54.306 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:25:54.306 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:25:54.306 00:25:54.306 --- 10.0.0.3 ping statistics --- 00:25:54.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.306 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:54.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:54.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:25:54.306 00:25:54.306 --- 10.0.0.1 ping statistics --- 00:25:54.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.306 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@433 -- # return 0 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@720 -- # xtrace_disable 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@481 -- # nvmfpid=98230 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@482 -- # waitforlisten 98230 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 98230 ']' 00:25:54.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:54.306 18:39:10 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:54.306 [2024-05-13 18:39:10.123302] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
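The nvmf_veth_init steps above build the test topology the target will listen on: an initiator-side veth (nvmf_init_if, 10.0.0.1) and a target-side veth moved into the nvmf_tgt_ns_spdk namespace (nvmf_tgt_if, 10.0.0.2), joined by the nvmf_br bridge, with TCP port 4420 opened and connectivity checked by the pings. A condensed sketch of the same sequence, with interface names and addresses copied from the log (the second target interface, nvmf_tgt_if2/10.0.0.3, follows the same pattern and is omitted here; treat this as an illustration, not the canonical common.sh implementation):

    # Sketch of the veth/namespace topology set up by nvmf_veth_init (run as root).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target address
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator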
00:25:54.306 [2024-05-13 18:39:10.123422] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.563 [2024-05-13 18:39:10.264222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:54.563 [2024-05-13 18:39:10.388836] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.563 [2024-05-13 18:39:10.388914] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.563 [2024-05-13 18:39:10.388929] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.563 [2024-05-13 18:39:10.388940] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.563 [2024-05-13 18:39:10.388949] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:54.563 [2024-05-13 18:39:10.389114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.563 [2024-05-13 18:39:10.389126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.497 18:39:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:55.497 18:39:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:55.497 18:39:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:55.497 18:39:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:55.497 18:39:11 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:55.497 18:39:11 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.497 18:39:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:55.497 18:39:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:55.755 [2024-05-13 18:39:11.443700] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.755 18:39:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:56.013 Malloc0 00:25:56.013 18:39:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:56.272 18:39:11 nvmf_tcp.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:56.529 18:39:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:56.786 [2024-05-13 18:39:12.526484] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:56.786 [2024-05-13 18:39:12.526765] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.786 18:39:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=98321 00:25:56.786 18:39:12 nvmf_tcp.nvmf_timeout 
-- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:25:56.786 18:39:12 nvmf_tcp.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 98321 /var/tmp/bdevperf.sock 00:25:56.786 18:39:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 98321 ']' 00:25:56.786 18:39:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:56.786 18:39:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:56.786 18:39:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:56.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:56.787 18:39:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:56.787 18:39:12 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:56.787 [2024-05-13 18:39:12.607404] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:25:56.787 [2024-05-13 18:39:12.607540] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98321 ] 00:25:57.045 [2024-05-13 18:39:12.753254] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.045 [2024-05-13 18:39:12.869550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:57.979 18:39:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:57.979 18:39:13 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:25:57.979 18:39:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:58.238 18:39:13 nvmf_tcp.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:25:58.496 NVMe0n1 00:25:58.496 18:39:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=98369 00:25:58.496 18:39:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:58.496 18:39:14 nvmf_tcp.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:25:58.496 Running I/O for 10 seconds... 
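Everything from the transport creation down to the "Running I/O for 10 seconds" line boils down to one RPC sequence: start nvmf_tgt inside the namespace, create the TCP transport, expose a Malloc-backed subsystem on 10.0.0.2:4420, then attach a bdevperf instance with a 5-second controller-loss timeout and a 2-second reconnect delay and kick off the verify workload. A condensed sketch with the flags copied from the log ($SPDK standing in for the repo checkout is an assumption here; the waitforlisten polling between steps is omitted):

    # Condensed target- and host-side RPC sequence for the timeout test
    # ($SPDK is assumed to be /home/vagrant/spdk_repo/spdk; run as root).
    SPDK=/home/vagrant/spdk_repo/spdk
    RPC=$SPDK/scripts/rpc.py

    # Target side: nvmf_tgt runs inside the namespace created earlier.
    ip netns exec nvmf_tgt_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: bdevperf starts idle (-z), the controller is attached with a
    # short loss timeout, then perform_tests launches the 10 s verify run.
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The nvmf_subsystem_remove_listener call that follows tears the listener down underneath the running workload; the burst of ABORTED - SQ DELETION completions below is the expected result, after which bdev_nvme's reconnect and controller-loss timers take over.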
00:25:59.431 18:39:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:59.694 [2024-05-13 18:39:15.530831] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce6170 is same with the state(5) to be set
00:25:59.694 [last message repeated for tqpair=0xce6170 through 2024-05-13 18:39:15.531383]
00:25:59.694 [2024-05-13 18:39:15.532415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.694 [2024-05-13 18:39:15.532457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.694 [2024-05-13 18:39:15.532480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:80080 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.694 [2024-05-13 18:39:15.532491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.694 [2024-05-13 18:39:15.532504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.694 [2024-05-13 18:39:15.532513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.694 [2024-05-13 18:39:15.532525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.694 [2024-05-13 18:39:15.532534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.694 [2024-05-13 18:39:15.532545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.694 [2024-05-13 18:39:15.532554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.694 [2024-05-13 18:39:15.532844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.532870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.532884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.532894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.532905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.532914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.532926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.532935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.532947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.532956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.532968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.532977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.533277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:59.695 [2024-05-13 18:39:15.533290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.533303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.533312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.533325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.533334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.533346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.533355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.533366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.533375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.533740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.533767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.533779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.533790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.533801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.533811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.533822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.533831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.533842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.533852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.533863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.533988] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.534309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.534322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.534434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.695 [2024-05-13 18:39:15.534449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.534461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.534471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.534482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.534491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.534502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.534836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.534853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.534977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.534994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.535004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.535015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.535291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.535314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.535325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.535336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.535347] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.535359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.535368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.535379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.535389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.535400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.535651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.535675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.535685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.535697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.535708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.535720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.535729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.535741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.535750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.535761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.536015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.536031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.536041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.536053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.536063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.536074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.536083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.536094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.536104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.536396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.536411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.536422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.695 [2024-05-13 18:39:15.536432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.695 [2024-05-13 18:39:15.536444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.536454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.536466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.536476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.536748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:80648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.536772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.536787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.536797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.536809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.536819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.536830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.536839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 
18:39:15.536850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.536860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.537107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.537128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.537140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.537150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.537161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.537171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.537182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.537191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.537212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:80720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.537348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.537499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.537765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.537790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.537926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.538048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.538068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.538080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.538218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.538363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.538490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.538508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.538629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.538651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.538793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.538934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:80784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.539034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.539048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.539058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.539069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.539337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.539607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.539620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.539632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.539641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.539652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.539661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.539672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.539682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.539693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:117 nsid:1 lba:80840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.696 [2024-05-13 18:39:15.539702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.539826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.696 [2024-05-13 18:39:15.539977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.540099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.696 [2024-05-13 18:39:15.540111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.540123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.696 [2024-05-13 18:39:15.540376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.540391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.696 [2024-05-13 18:39:15.540401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.540412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.696 [2024-05-13 18:39:15.540423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.540435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.696 [2024-05-13 18:39:15.540444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.540455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.696 [2024-05-13 18:39:15.540465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.540690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.696 [2024-05-13 18:39:15.540711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.540724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.696 [2024-05-13 18:39:15.540735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.540746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80336 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:59.696 [2024-05-13 18:39:15.540755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.540767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.696 [2024-05-13 18:39:15.540776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.540787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.696 [2024-05-13 18:39:15.540797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.541015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.696 [2024-05-13 18:39:15.541035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.696 [2024-05-13 18:39:15.541048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.696 [2024-05-13 18:39:15.541058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.541071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:80376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.697 [2024-05-13 18:39:15.541080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.541091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.697 [2024-05-13 18:39:15.541101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.541112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.697 [2024-05-13 18:39:15.541121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.541236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.541251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.541510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.541533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.541548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 
[2024-05-13 18:39:15.541558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.541582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:80872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.541594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.541605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:80880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.541615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.541627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.541636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.541647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:80896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.541987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:80960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:81000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:81008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.542847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.542856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.543118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.543138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.543151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.543161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.543172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.543181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.543192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:81048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.543202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.543213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.543332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.543352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.543500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.543638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.543651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.543662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:81080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.543672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.543683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.697 [2024-05-13 18:39:15.543692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.543704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.697 [2024-05-13 18:39:15.543713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.543724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.697 [2024-05-13 18:39:15.543981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.544010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.697 [2024-05-13 18:39:15.544261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.544281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:80424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.697 [2024-05-13 18:39:15.544291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.544302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.697 [2024-05-13 18:39:15.544442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.544665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:80440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.697 [2024-05-13 18:39:15.544699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.697 [2024-05-13 18:39:15.544735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.697 [2024-05-13 18:39:15.544745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.698 [2024-05-13 18:39:15.544754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80448 len:8 PRP1 0x0 PRP2 0x0 00:25:59.698 [2024-05-13 18:39:15.544763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.698 [2024-05-13 18:39:15.545040] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb640b0 was disconnected and freed. reset controller. 
00:25:59.698 [2024-05-13 18:39:15.545274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.698 [2024-05-13 18:39:15.545392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.698 [2024-05-13 18:39:15.545406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.698 [2024-05-13 18:39:15.545415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.698 [2024-05-13 18:39:15.545425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.698 [2024-05-13 18:39:15.545682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.698 [2024-05-13 18:39:15.545705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.698 [2024-05-13 18:39:15.545716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.698 [2024-05-13 18:39:15.545726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3490 is same with the state(5) to be set 00:25:59.698 [2024-05-13 18:39:15.546201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.698 [2024-05-13 18:39:15.546239] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf3490 (9): Bad file descriptor 00:25:59.698 [2024-05-13 18:39:15.546588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.698 [2024-05-13 18:39:15.546770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.698 [2024-05-13 18:39:15.546876] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3490 with addr=10.0.0.2, port=4420 00:25:59.698 [2024-05-13 18:39:15.546890] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3490 is same with the state(5) to be set 00:25:59.698 [2024-05-13 18:39:15.546912] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf3490 (9): Bad file descriptor 00:25:59.698 [2024-05-13 18:39:15.546929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:59.698 [2024-05-13 18:39:15.546939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:59.698 [2024-05-13 18:39:15.546949] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:59.698 [2024-05-13 18:39:15.546970] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.698 [2024-05-13 18:39:15.547224] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:59.698 18:39:15 nvmf_tcp.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:26:02.227 [2024-05-13 18:39:17.547607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.227 [2024-05-13 18:39:17.547724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.227 [2024-05-13 18:39:17.547745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3490 with addr=10.0.0.2, port=4420 00:26:02.227 [2024-05-13 18:39:17.547760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3490 is same with the state(5) to be set 00:26:02.227 [2024-05-13 18:39:17.547790] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf3490 (9): Bad file descriptor 00:26:02.227 [2024-05-13 18:39:17.547811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:02.227 [2024-05-13 18:39:17.547820] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:02.227 [2024-05-13 18:39:17.547832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:02.227 [2024-05-13 18:39:17.547860] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:02.227 [2024-05-13 18:39:17.547872] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:02.227 18:39:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:26:02.227 18:39:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:02.227 18:39:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:02.227 18:39:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:26:02.227 18:39:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:26:02.227 18:39:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:02.227 18:39:17 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:02.227 18:39:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:26:02.227 18:39:18 nvmf_tcp.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:26:04.126 [2024-05-13 18:39:19.548183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.126 [2024-05-13 18:39:19.548299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.126 [2024-05-13 18:39:19.548320] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaf3490 with addr=10.0.0.2, port=4420 00:26:04.126 [2024-05-13 18:39:19.548335] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaf3490 is same with the state(5) to be set 00:26:04.126 [2024-05-13 18:39:19.548365] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf3490 (9): Bad file descriptor 00:26:04.126 [2024-05-13 18:39:19.548385] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:04.126 [2024-05-13 18:39:19.548395] 
nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:04.126 [2024-05-13 18:39:19.548405] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:04.126 [2024-05-13 18:39:19.548433] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:04.126 [2024-05-13 18:39:19.548446] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:06.025 [2024-05-13 18:39:21.548501] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:06.958 00:26:06.958 Latency(us) 00:26:06.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.958 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:06.958 Verification LBA range: start 0x0 length 0x4000 00:26:06.958 NVMe0n1 : 8.13 1231.23 4.81 15.75 0.00 102740.83 2204.39 7046430.72 00:26:06.958 =================================================================================================================== 00:26:06.958 Total : 1231.23 4.81 15.75 0.00 102740.83 2204.39 7046430.72 00:26:06.958 0 00:26:07.216 18:39:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:26:07.216 18:39:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:07.216 18:39:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:07.473 18:39:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:26:07.473 18:39:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:26:07.473 18:39:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:07.473 18:39:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:07.731 18:39:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:26:07.731 18:39:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@65 -- # wait 98369 00:26:07.731 18:39:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 98321 00:26:07.731 18:39:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 98321 ']' 00:26:07.731 18:39:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 98321 00:26:07.731 18:39:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:26:07.731 18:39:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:07.731 18:39:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98321 00:26:07.731 18:39:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:07.731 killing process with pid 98321 00:26:07.731 18:39:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:07.731 18:39:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98321' 00:26:07.731 18:39:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 98321 00:26:07.731 18:39:23 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 98321 00:26:07.731 Received shutdown signal, test time was about 9.160772 seconds 00:26:07.731 00:26:07.731 Latency(us) 00:26:07.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:07.731 
=================================================================================================================== 00:26:07.731 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:07.987 18:39:23 nvmf_tcp.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:08.244 [2024-05-13 18:39:24.048499] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.244 18:39:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=98527 00:26:08.244 18:39:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:08.245 18:39:24 nvmf_tcp.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 98527 /var/tmp/bdevperf.sock 00:26:08.245 18:39:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 98527 ']' 00:26:08.245 18:39:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.245 18:39:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:08.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:08.245 18:39:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.245 18:39:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:08.245 18:39:24 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:08.245 [2024-05-13 18:39:24.121780] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:26:08.245 [2024-05-13 18:39:24.121883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98527 ] 00:26:08.502 [2024-05-13 18:39:24.257210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.502 [2024-05-13 18:39:24.384824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:09.499 18:39:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:09.499 18:39:25 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:26:09.499 18:39:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:09.499 18:39:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:26:09.757 NVMe0n1 00:26:10.014 18:39:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:10.014 18:39:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=98569 00:26:10.014 18:39:25 nvmf_tcp.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:26:10.014 Running I/O for 10 seconds... 
00:26:10.960 18:39:26 nvmf_tcp.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:11.221 [2024-05-13 18:39:26.995945] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996013] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996025] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996034] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996044] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996052] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996062] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996070] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996079] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996087] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996096] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996105] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996113] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996121] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996130] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996138] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996146] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996154] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996162] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996171] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.221 [2024-05-13 18:39:26.996179] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xee2aa0 is same with the state(5) to be set
00:26:11.222 [2024-05-13 18:39:26.996543] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996551] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996559] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996566] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996591] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996600] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996608] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996617] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996626] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996634] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996642] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996650] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996659] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996667] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996687] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996696] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996704] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996713] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996722] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996730] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.996738] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee2aa0 is same with the state(5) to be set 00:26:11.222 [2024-05-13 18:39:26.997237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.222 [2024-05-13 18:39:26.997693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.222 [2024-05-13 18:39:26.997704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.997713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.997725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 
18:39:26.997734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.997746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.997755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.997766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.997776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.997787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.997797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.997809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.997818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.997830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.997839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.997851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.997860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.997871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.997881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.997892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.997902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.997913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.997923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.997934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.997944] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.997955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.997965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.997977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.997986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.997998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.223 [2024-05-13 18:39:26.998445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.223 [2024-05-13 18:39:26.998466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.223 [2024-05-13 18:39:26.998491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.223 [2024-05-13 18:39:26.998512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.223 [2024-05-13 18:39:26.998534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.223 [2024-05-13 18:39:26.998554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.223 [2024-05-13 18:39:26.998565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 
[2024-05-13 18:39:26.998615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.998981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.998990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:33 nsid:1 lba:74912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:74992 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.224 [2024-05-13 18:39:26.999360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.224 [2024-05-13 18:39:26.999380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.224 [2024-05-13 18:39:26.999401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.224 [2024-05-13 18:39:26.999421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.224 [2024-05-13 18:39:26.999442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.224 [2024-05-13 18:39:26.999453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 
18:39:26.999462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:75040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.225 [2024-05-13 18:39:26.999757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:75048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.225 [2024-05-13 18:39:26.999777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:75056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.225 [2024-05-13 18:39:26.999797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:75064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.225 [2024-05-13 18:39:26.999817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:75072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.225 [2024-05-13 18:39:26.999837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.225 [2024-05-13 18:39:26.999863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:75088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:11.225 [2024-05-13 18:39:26.999883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:26.999986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:26.999997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:11.225 [2024-05-13 18:39:27.000006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:27.000016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5840b0 is same with the state(5) to be set 00:26:11.225 [2024-05-13 18:39:27.000030] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:11.225 [2024-05-13 18:39:27.000038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:11.225 [2024-05-13 18:39:27.000047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74696 len:8 PRP1 0x0 PRP2 0x0 00:26:11.225 [2024-05-13 18:39:27.000056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:11.225 [2024-05-13 18:39:27.001768] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5840b0 was disconnected and freed. reset controller. 
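The long run of *NOTICE* pairs above is one command/completion print per queued I/O: once the target side of the connection is gone, the NVMe driver fails every command still queued on I/O qpair 1 with ABORTED - SQ DELETION, and bdev_nvme then frees the disconnected qpair (0x5840b0 above) and starts a controller reset. When triaging a run like this it is usually enough to collapse that storm into counts; a minimal shell sketch, assuming the console output has been saved to a file (nvmf_timeout.log is only a placeholder name):

    # how many queued I/Os were failed with SQ DELETION during the qpair teardown
    grep -c 'ABORTED - SQ DELETION' nvmf_timeout.log
    # opcode, cid and LBA of each aborted command, for spot-checking the affected range
    grep -Eo '(READ|WRITE) sqid:1 cid:[0-9]+ nsid:1 lba:[0-9]+' nvmf_timeout.log | head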
00:26:11.225 [2024-05-13 18:39:27.001933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:11.225 [2024-05-13 18:39:27.001953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.225 [2024-05-13 18:39:27.001966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:11.225 [2024-05-13 18:39:27.001976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.225 [2024-05-13 18:39:27.001986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:11.225 [2024-05-13 18:39:27.001995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.225 [2024-05-13 18:39:27.002004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:11.225 [2024-05-13 18:39:27.002013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:11.225 [2024-05-13 18:39:27.002023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513490 is same with the state(5) to be set
00:26:11.226 [2024-05-13 18:39:27.002272] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:11.226 [2024-05-13 18:39:27.002298] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x513490 (9): Bad file descriptor
00:26:11.226 [2024-05-13 18:39:27.002405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.226 [2024-05-13 18:39:27.002457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:11.226 [2024-05-13 18:39:27.002474] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x513490 with addr=10.0.0.2, port=4420
00:26:11.226 [2024-05-13 18:39:27.002485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513490 is same with the state(5) to be set
00:26:11.226 [2024-05-13 18:39:27.002504] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x513490 (9): Bad file descriptor
00:26:11.226 [2024-05-13 18:39:27.002521] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:11.226 [2024-05-13 18:39:27.002530] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:11.226 [2024-05-13 18:39:27.002540] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:11.226 [2024-05-13 18:39:27.016871] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
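The errno = 111 from posix_sock_create above is ECONNREFUSED: at this point the test has already pulled the TCP listener off the target, so every reconnect attempt to 10.0.0.2:4420 is refused, nvme_ctrlr_process_init gives up, and the reset is reported as failed; the driver then schedules another reset and tries again, as the next lines show. A quick way to decode that errno on any Linux box with python3 (nothing SPDK-specific assumed here):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # prints: ECONNREFUSED - Connection refused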
00:26:11.226 [2024-05-13 18:39:27.016924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
18:39:27 nvmf_tcp.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:26:12.158 [2024-05-13 18:39:28.017116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.159 [2024-05-13 18:39:28.017231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:12.159 [2024-05-13 18:39:28.017251] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x513490 with addr=10.0.0.2, port=4420
00:26:12.159 [2024-05-13 18:39:28.017266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513490 is same with the state(5) to be set
00:26:12.159 [2024-05-13 18:39:28.017294] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x513490 (9): Bad file descriptor
00:26:12.159 [2024-05-13 18:39:28.017329] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:26:12.159 [2024-05-13 18:39:28.017341] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:26:12.159 [2024-05-13 18:39:28.017352] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:12.159 [2024-05-13 18:39:28.017382] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:26:12.159 [2024-05-13 18:39:28.017395] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
18:39:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:12.485 [2024-05-13 18:39:28.318114] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:12.485 18:39:28 nvmf_tcp.nvmf_timeout -- host/timeout.sh@92 -- # wait 98569
00:26:13.416 [2024-05-13 18:39:29.037898] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:20.032
00:26:20.032                                                                                           Latency(us)
00:26:20.032 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:20.032 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:20.032      Verification LBA range: start 0x0 length 0x4000
00:26:20.033      NVMe0n1                :      10.01    6293.81      24.59       0.00     0.00   20298.68    2055.45 3035150.89
00:26:20.033 ===================================================================================================================
00:26:20.033 Total                       :               6293.81      24.59       0.00     0.00   20298.68    2055.45 3035150.89
00:26:20.033 0
00:26:20.033 18:39:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=98682
00:26:20.033 18:39:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:26:20.033 18:39:35 nvmf_tcp.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:20.290 Running I/O for 10 seconds...
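Stripped of the driver noise, the host/timeout.sh trace lines (@90-@92 and @96-@98 above, @99 just below) are the test driving the target over JSON-RPC: remove the TCP listener so outstanding I/O starts failing, give the reset path a moment, add the listener back so the reconnect succeeds, collect the bdevperf latency summary, then start the next 10-second pass and pull the listener again. A rough sketch of one such iteration, built only from the commands visible in this log (SPDK_DIR stands in for /home/vagrant/spdk_repo/spdk, and BDEVPERF_PID for the backgrounded bdevperf job the script waits on):

    # take the listener away: new connects to 10.0.0.2:4420 fail with ECONNREFUSED
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    # bring it back: the pending controller reset reconnects and completes
    "$SPDK_DIR"/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    wait "$BDEVPERF_PID"   # bdevperf exits and prints the Latency(us) table seen above
    # kick off the next timed run through bdevperf's RPC socket
    "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &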
00:26:21.228 18:39:36 nvmf_tcp.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:21.228 [2024-05-13 18:39:37.107835] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.107902] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.107918] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.107928] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.107936] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.107945] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.107954] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.107962] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.107971] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.107980] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.107988] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.107997] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108005] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108014] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108022] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108030] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108038] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108046] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108055] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108063] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108071] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108079] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108087] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108095] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108103] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108111] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108119] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108127] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108136] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108145] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108153] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108161] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108170] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108178] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108188] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108197] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108206] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108214] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108223] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee3b80 is same with the state(5) to be set 00:26:21.228 [2024-05-13 18:39:37.108511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 
[2024-05-13 18:39:37.108606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108834] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.228 [2024-05-13 18:39:37.108918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.228 [2024-05-13 18:39:37.108929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.108939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.108951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.108961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.108973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.108983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.108994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:76424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:76504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.229 [2024-05-13 18:39:37.109452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 
[2024-05-13 18:39:37.109483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:76648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.229 [2024-05-13 18:39:37.109773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.229 [2024-05-13 18:39:37.109782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.109793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.109803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.109814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.109824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.109835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.109844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.109855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.109864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.109876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.109890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.109902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.109912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.109924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:50 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.109933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.109944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.109954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.109965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.109974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.109985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.109995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76816 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:21.230 [2024-05-13 18:39:37.110351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110562] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:76992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.230 [2024-05-13 18:39:37.110848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.230 [2024-05-13 18:39:37.110861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.110870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.110882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.110892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.110903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.110913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.110924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.110933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.110943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.110953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.110965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.110974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.110985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.110995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:77160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.231 [2024-05-13 18:39:37.111568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x562ed0 is same with the state(5) to be set 00:26:21.231 [2024-05-13 18:39:37.111609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.231 [2024-05-13 18:39:37.111618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.231 [2024-05-13 18:39:37.111626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77256 len:8 PRP1 0x0 PRP2 0x0 00:26:21.231 [2024-05-13 18:39:37.111636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111731] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x562ed0 was disconnected and freed. reset controller. 
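The flood of ABORTED - SQ DELETION (00/08) completions above is the initiator printing every READ/WRITE that was still queued on qpair 0x562ed0 when the qpair was torn down for the controller reset; the entries differ only in cid and lba. When triaging a run like this it is usually enough to summarize the flood rather than read it record by record. A minimal shell sketch, assuming the console output has been saved to a file (bdevperf.log is a hypothetical name, not something this job produces):

  # Count how many queued I/Os were failed back with SQ DELETION on qid 1, regardless of line wrapping
  grep -o 'ABORTED - SQ DELETION (00/08) qid:1' bdevperf.log | wc -l
  # Show the lowest and highest LBA among the printed commands
  grep -o 'lba:[0-9]*' bdevperf.log | cut -d: -f2 | sort -n | sed -n '1p;$p'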
00:26:21.231 [2024-05-13 18:39:37.111823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.231 [2024-05-13 18:39:37.111838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.231 [2024-05-13 18:39:37.111858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.231 [2024-05-13 18:39:37.111877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.231 [2024-05-13 18:39:37.111896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.231 [2024-05-13 18:39:37.111904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513490 is same with the state(5) to be set 00:26:21.231 [2024-05-13 18:39:37.113602] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.231 [2024-05-13 18:39:37.113642] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x513490 (9): Bad file descriptor 00:26:21.231 [2024-05-13 18:39:37.113772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.231 [2024-05-13 18:39:37.113829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.232 [2024-05-13 18:39:37.113847] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x513490 with addr=10.0.0.2, port=4420 00:26:21.232 [2024-05-13 18:39:37.113858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513490 is same with the state(5) to be set 00:26:21.232 [2024-05-13 18:39:37.113877] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x513490 (9): Bad file descriptor 00:26:21.232 [2024-05-13 18:39:37.113894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:21.232 [2024-05-13 18:39:37.113903] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:21.232 [2024-05-13 18:39:37.113913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:21.232 [2024-05-13 18:39:37.113935] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.232 [2024-05-13 18:39:37.113946] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:21.232 18:39:37 nvmf_tcp.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:26:22.608 [2024-05-13 18:39:38.114120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-13 18:39:38.114228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:22.608 [2024-05-13 18:39:38.114249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x513490 with addr=10.0.0.2, port=4420 00:26:22.608 [2024-05-13 18:39:38.114263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513490 is same with the state(5) to be set 00:26:22.608 [2024-05-13 18:39:38.114292] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x513490 (9): Bad file descriptor 00:26:22.608 [2024-05-13 18:39:38.114327] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:22.608 [2024-05-13 18:39:38.114339] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:22.608 [2024-05-13 18:39:38.114350] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:22.608 [2024-05-13 18:39:38.114379] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:22.608 [2024-05-13 18:39:38.114392] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:23.175 [2024-05-13 18:39:39.114544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.175 [2024-05-13 18:39:39.114652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:23.175 [2024-05-13 18:39:39.114674] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x513490 with addr=10.0.0.2, port=4420 00:26:23.175 [2024-05-13 18:39:39.114688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513490 is same with the state(5) to be set 00:26:23.175 [2024-05-13 18:39:39.114723] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x513490 (9): Bad file descriptor 00:26:23.175 [2024-05-13 18:39:39.114742] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:23.175 [2024-05-13 18:39:39.114753] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:23.175 [2024-05-13 18:39:39.114764] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:23.175 [2024-05-13 18:39:39.114792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:23.175 [2024-05-13 18:39:39.114805] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.192 [2024-05-13 18:39:40.118421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.192 [2024-05-13 18:39:40.118515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:24.192 [2024-05-13 18:39:40.118535] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x513490 with addr=10.0.0.2, port=4420 00:26:24.192 [2024-05-13 18:39:40.118550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x513490 is same with the state(5) to be set 00:26:24.192 [2024-05-13 18:39:40.118834] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x513490 (9): Bad file descriptor 00:26:24.192 [2024-05-13 18:39:40.119086] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:24.192 [2024-05-13 18:39:40.119100] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:24.192 [2024-05-13 18:39:40.119111] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:24.451 [2024-05-13 18:39:40.123537] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:24.451 [2024-05-13 18:39:40.123585] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:24.451 18:39:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:24.709 [2024-05-13 18:39:40.437773] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:24.709 18:39:40 nvmf_tcp.nvmf_timeout -- host/timeout.sh@103 -- # wait 98682 00:26:25.275 [2024-05-13 18:39:41.160723] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
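The connect() failed, errno = 111 retries above are the expected part of this test: the TCP listener for nqn.2016-06.io.spdk:cnode1 has been removed, bdev_nvme keeps reconnecting to 10.0.0.2:4420 about once per second (18:39:37 through 18:39:40), and the reset only succeeds after the nvmf_subsystem_add_listener call shown here restores the listener. A minimal sketch of that toggle, reconstructed from the rpc.py calls visible in this output (the matching remove_listener call appears at the start of the second bdevperf run further down) rather than from timeout.sh itself:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Drop the listener so new connections fail with ECONNREFUSED (errno 111)
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  # Give bdev_nvme a few reconnect attempts (timeout.sh sleeps 3 seconds at this point)
  sleep 3
  # Restore the listener; the next reconnect attempt and the controller reset should succeed
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420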
00:26:30.541 00:26:30.541 Latency(us) 00:26:30.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.541 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:30.541 Verification LBA range: start 0x0 length 0x4000 00:26:30.541 NVMe0n1 : 10.01 5361.77 20.94 3569.25 0.00 14293.99 647.91 3019898.88 00:26:30.541 =================================================================================================================== 00:26:30.541 Total : 5361.77 20.94 3569.25 0.00 14293.99 0.00 3019898.88 00:26:30.541 0 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 98527 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 98527 ']' 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 98527 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98527 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:30.541 killing process with pid 98527 00:26:30.541 Received shutdown signal, test time was about 10.000000 seconds 00:26:30.541 00:26:30.541 Latency(us) 00:26:30.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.541 =================================================================================================================== 00:26:30.541 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98527' 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 98527 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 98527 00:26:30.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=98809 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 98809 /var/tmp/bdevperf.sock 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@827 -- # '[' -z 98809 ']' 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:30.541 18:39:46 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:30.541 [2024-05-13 18:39:46.383984] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:26:30.541 [2024-05-13 18:39:46.384085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98809 ] 00:26:30.799 [2024-05-13 18:39:46.519048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.799 [2024-05-13 18:39:46.638261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.733 18:39:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:31.733 18:39:47 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@860 -- # return 0 00:26:31.733 18:39:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98809 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:26:31.733 18:39:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=98837 00:26:31.733 18:39:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:26:31.733 18:39:47 nvmf_tcp.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:32.348 NVMe0n1 00:26:32.348 18:39:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:32.348 18:39:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=98885 00:26:32.348 18:39:48 nvmf_tcp.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:26:32.348 Running I/O for 10 seconds... 
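For the second pass the test starts another bdevperf instance in idle mode (-z) on /var/tmp/bdevperf.sock, applies the bdev_nvme options above, and attaches the target with a 5-second controller-loss timeout and a 2-second reconnect delay before driving I/O through bdevperf.py perform_tests (next line). A minimal sketch of that sequence, using only the binaries, socket path, and flags quoted in this log; the plain sleep is an assumption standing in for the waitforlisten helper the harness actually uses:

  BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/bdevperf.sock
  # Start bdevperf idle (-z) so it waits for RPC configuration before running the randread job
  $BDEVPERF -m 0x4 -z -r $SOCK -q 128 -o 4096 -w randread -t 10 -f &
  sleep 1   # stand-in for waitforlisten on $SOCK
  # Same bdev_nvme options as the run above (timeout.sh@118)
  $RPC -s $SOCK bdev_nvme_set_options -r -1 -e 9
  # Attach the target with short loss/reconnect settings so timeout handling is exercised quickly
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  # Run the configured workload and collect the per-bdev results
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests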
00:26:33.286 18:39:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:33.548 [2024-05-13 18:39:49.293952] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.293999] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294011] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294020] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294029] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294037] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294046] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294054] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294063] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294071] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294079] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294087] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294095] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294103] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294111] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294119] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294126] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294136] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294148] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294160] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294173] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294185] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294197] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294221] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294234] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294245] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294253] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294261] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294270] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294278] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294286] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294299] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294312] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294326] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294335] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294344] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294352] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294360] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294368] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294377] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294385] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294397] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294410] 
tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294424] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294437] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294450] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294462] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294470] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294478] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294487] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294495] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294503] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294511] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294519] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294527] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294534] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294542] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294550] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294558] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294591] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294610] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294635] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294649] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294658] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 
00:26:33.548 [2024-05-13 18:39:49.294666] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294674] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294684] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294692] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294700] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294709] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294717] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294725] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294733] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.548 [2024-05-13 18:39:49.294741] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294749] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294758] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294772] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294786] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294797] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294807] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294815] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294823] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294831] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294839] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294847] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294855] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is 
same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294864] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294871] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294879] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294888] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294896] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294904] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294912] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294922] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294930] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294938] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294946] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294956] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294964] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294972] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294980] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294989] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.294997] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295005] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295013] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295020] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295028] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295036] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295044] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295051] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295059] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295067] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295075] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295083] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295091] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295099] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295106] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295114] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295122] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295129] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295139] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295147] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295155] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295163] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295172] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295180] tcp.c:1595:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xee5f40 is same with the state(5) to be set 00:26:33.549 [2024-05-13 18:39:49.295669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:56280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.549 [2024-05-13 18:39:49.295702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.549 [2024-05-13 18:39:49.295725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:40616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.549 
[2024-05-13 18:39:49.295735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.549 [2024-05-13 18:39:49.295747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.549 [2024-05-13 18:39:49.295757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.549 [2024-05-13 18:39:49.295767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.549 [2024-05-13 18:39:49.295777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.549 [2024-05-13 18:39:49.295788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.549 [2024-05-13 18:39:49.295797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.549 [2024-05-13 18:39:49.295808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.549 [2024-05-13 18:39:49.295818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.549 [2024-05-13 18:39:49.295829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.549 [2024-05-13 18:39:49.295838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.549 [2024-05-13 18:39:49.295849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:111696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.549 [2024-05-13 18:39:49.295858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.549 [2024-05-13 18:39:49.295869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.549 [2024-05-13 18:39:49.295879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.549 [2024-05-13 18:39:49.295890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:26312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.549 [2024-05-13 18:39:49.295901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.549 [2024-05-13 18:39:49.295912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.549 [2024-05-13 18:39:49.295922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.549 [2024-05-13 18:39:49.295933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:51944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.549 [2024-05-13 18:39:49.295943] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.549 [2024-05-13 18:39:49.295954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.295963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.295975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:22576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.295984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.295995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:99888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:128584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296152] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:128160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:74952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:75056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:91672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:73288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:47920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:52512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:34584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.550 [2024-05-13 18:39:49.296668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.550 [2024-05-13 18:39:49.296690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.296702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.296712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.296724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:128248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.296733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.296745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.296754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.296765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.296775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.296786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.296795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.296806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.296815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:33.551 [2024-05-13 18:39:49.296826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.296836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.296847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:9792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.296857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.296873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.296883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.296894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.296904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.296915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.296925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.296936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.296945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.296956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.296965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.296976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.296986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.296998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297039] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:1112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:53848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:125232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:28160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:125464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:65352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.297454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.297466] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.551 [2024-05-13 18:39:49.298533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.551 [2024-05-13 18:39:49.298558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:86136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:124464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:31480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:54072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77688 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.298979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.298990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:33.552 [2024-05-13 18:39:49.299000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.299011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.299021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.299032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.299041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.299052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.299061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.299073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.299082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.299098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.299108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.299120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.299129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.299141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.299150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.299161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.299171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.299182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.299196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.299207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:87736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 
18:39:49.299216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.299228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.552 [2024-05-13 18:39:49.299237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.552 [2024-05-13 18:39:49.299249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.553 [2024-05-13 18:39:49.299258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.553 [2024-05-13 18:39:49.299278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:100296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.553 [2024-05-13 18:39:49.299298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.553 [2024-05-13 18:39:49.299325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.553 [2024-05-13 18:39:49.299346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.553 [2024-05-13 18:39:49.299366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.553 [2024-05-13 18:39:49.299387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:34952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.553 [2024-05-13 18:39:49.299407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.553 [2024-05-13 18:39:49.299428] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.553 [2024-05-13 18:39:49.299455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.553 [2024-05-13 18:39:49.299475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.553 [2024-05-13 18:39:49.299496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.553 [2024-05-13 18:39:49.299517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299551] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:33.553 [2024-05-13 18:39:49.299562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:33.553 [2024-05-13 18:39:49.299581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:126624 len:8 PRP1 0x0 PRP2 0x0 00:26:33.553 [2024-05-13 18:39:49.299592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299654] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x25290b0 was disconnected and freed. reset controller. 
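
The run of notices above shows every outstanding READ on submission queue 1 being completed as ABORTED - SQ DELETION before qpair 0x25290b0 is freed and the controller reset starts. When triaging a saved copy of this console output, two grep one-liners are enough to confirm that the whole queue was drained rather than a single command getting stuck; this is a minimal sketch, and console.log is a hypothetical name for such a capture, not a file the test itself writes.

  # Hypothetical capture of this console output; adjust the name to wherever the log was saved.
  LOG=console.log

  # Count the ABORTED - SQ DELETION completions; 'grep -o | wc -l' counts matches
  # rather than lines, which matters because the console wraps several records per line.
  grep -o 'ABORTED - SQ DELETION' "$LOG" | wc -l

  # Count the distinct command identifiers among the aborted READs on sqid:1.
  grep -o 'READ sqid:1 cid:[0-9]*' "$LOG" | sort -u | wc -l
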
00:26:33.553 [2024-05-13 18:39:49.299786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.553 [2024-05-13 18:39:49.299806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.553 [2024-05-13 18:39:49.299827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.553 [2024-05-13 18:39:49.299847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.553 [2024-05-13 18:39:49.299866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.553 [2024-05-13 18:39:49.299877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b8490 is same with the state(5) to be set 00:26:33.553 [2024-05-13 18:39:49.300126] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:33.553 [2024-05-13 18:39:49.300152] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b8490 (9): Bad file descriptor 00:26:33.553 [2024-05-13 18:39:49.300265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.553 [2024-05-13 18:39:49.300318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.553 [2024-05-13 18:39:49.300335] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b8490 with addr=10.0.0.2, port=4420 00:26:33.553 [2024-05-13 18:39:49.300346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b8490 is same with the state(5) to be set 00:26:33.553 [2024-05-13 18:39:49.300370] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b8490 (9): Bad file descriptor 00:26:33.553 [2024-05-13 18:39:49.300386] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:33.553 [2024-05-13 18:39:49.300396] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:33.553 [2024-05-13 18:39:49.300406] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:33.553 [2024-05-13 18:39:49.300432] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
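
The reset attempt above dies at the socket layer: posix_sock_create reports connect() errno = 111, which on Linux is ECONNREFUSED, so nothing is accepting connections on 10.0.0.2 port 4420 and the reinitialization of nqn.2016-06.io.spdk:cnode1 cannot make progress. When reproducing this kind of failure it can help to probe the listener independently of SPDK; the sketch below uses bash's /dev/tcp redirection with the address and port taken from the log, and is only a reproduction aid, not part of timeout.sh.

  # Probe the target address/port reported in the sock connection errors (10.0.0.2:4420).
  # 'timeout 2' bounds the attempt; stderr is silenced because bash prints its own
  # connect error when the port is closed.
  if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "listener on 10.0.0.2:4420 accepted the connection"
  else
      echo "connect failed or timed out - consistent with the errno 111 (ECONNREFUSED) lines above"
  fi
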
00:26:33.553 18:39:49 nvmf_tcp.nvmf_timeout -- host/timeout.sh@128 -- # wait 98885 00:26:33.553 [2024-05-13 18:39:49.322416] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:35.451 [2024-05-13 18:39:51.322727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.451 [2024-05-13 18:39:51.322845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.451 [2024-05-13 18:39:51.322865] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b8490 with addr=10.0.0.2, port=4420 00:26:35.451 [2024-05-13 18:39:51.322879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b8490 is same with the state(5) to be set 00:26:35.451 [2024-05-13 18:39:51.322909] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b8490 (9): Bad file descriptor 00:26:35.451 [2024-05-13 18:39:51.322929] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:35.451 [2024-05-13 18:39:51.322939] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:35.451 [2024-05-13 18:39:51.322950] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:35.451 [2024-05-13 18:39:51.322979] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:35.451 [2024-05-13 18:39:51.322991] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:37.981 [2024-05-13 18:39:53.323202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.981 [2024-05-13 18:39:53.323311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.981 [2024-05-13 18:39:53.323332] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24b8490 with addr=10.0.0.2, port=4420 00:26:37.981 [2024-05-13 18:39:53.323347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24b8490 is same with the state(5) to be set 00:26:37.981 [2024-05-13 18:39:53.323376] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24b8490 (9): Bad file descriptor 00:26:37.981 [2024-05-13 18:39:53.323395] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:37.981 [2024-05-13 18:39:53.323405] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:37.981 [2024-05-13 18:39:53.323416] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:37.981 [2024-05-13 18:39:53.323447] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:37.981 [2024-05-13 18:39:53.323459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:39.900 [2024-05-13 18:39:55.323585] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
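
The reconnect attempts above are spaced almost exactly two seconds apart (18:39:49.32, 18:39:51.32, 18:39:53.32) before bdev_nvme gives up for good at 18:39:55, which lines up with the reconnect-delay entries in the trace dump further down. A small awk sketch, using the three 'resetting controller' timestamps copied straight from these lines, makes the spacing explicit:

  # Wall-clock timestamps of the nvme_ctrlr_disconnect 'resetting controller' notices above.
  awk 'BEGIN {
      n = split("18:39:49.322416 18:39:51.322991 18:39:53.323459", ts, " ")
      for (i = 2; i <= n; i++) {
          split(ts[i-1], a, ":"); split(ts[i], b, ":")
          # all samples fall within the same hour, so seconds-of-day differences are enough
          printf "gap %d: %.3f s\n", i - 1, (b[1]*3600 + b[2]*60 + b[3]) - (a[1]*3600 + a[2]*60 + a[3])
      }
  }'
  # prints: gap 1: 2.001 s, gap 2: 2.000 s
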
00:26:40.467 00:26:40.467 Latency(us) 00:26:40.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.467 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:26:40.467 NVMe0n1 : 8.17 2611.75 10.20 15.67 0.00 48654.59 2398.02 7015926.69 00:26:40.467 =================================================================================================================== 00:26:40.467 Total : 2611.75 10.20 15.67 0.00 48654.59 2398.02 7015926.69 00:26:40.467 0 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:40.467 Attaching 5 probes... 00:26:40.467 1416.167807: reset bdev controller NVMe0 00:26:40.467 1416.241942: reconnect bdev controller NVMe0 00:26:40.467 3438.598078: reconnect delay bdev controller NVMe0 00:26:40.467 3438.625766: reconnect bdev controller NVMe0 00:26:40.467 5439.082554: reconnect delay bdev controller NVMe0 00:26:40.467 5439.111067: reconnect bdev controller NVMe0 00:26:40.467 7439.557135: reconnect delay bdev controller NVMe0 00:26:40.467 7439.592723: reconnect bdev controller NVMe0 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@136 -- # kill 98837 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 98809 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 98809 ']' 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 98809 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98809 00:26:40.467 killing process with pid 98809 00:26:40.467 Received shutdown signal, test time was about 8.224399 seconds 00:26:40.467 00:26:40.467 Latency(us) 00:26:40.467 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:40.467 =================================================================================================================== 00:26:40.467 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98809' 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 98809 00:26:40.467 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 98809 00:26:40.726 18:39:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:40.985 18:39:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:26:40.985 18:39:56 nvmf_tcp.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:26:40.985 18:39:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:40.985 
18:39:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@117 -- # sync 00:26:40.985 18:39:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:40.985 18:39:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@120 -- # set +e 00:26:40.985 18:39:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:40.985 18:39:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:40.985 rmmod nvme_tcp 00:26:40.985 rmmod nvme_fabrics 00:26:41.244 rmmod nvme_keyring 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@124 -- # set -e 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@125 -- # return 0 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@489 -- # '[' -n 98230 ']' 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@490 -- # killprocess 98230 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@946 -- # '[' -z 98230 ']' 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@950 -- # kill -0 98230 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # uname 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 98230 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:41.244 killing process with pid 98230 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@964 -- # echo 'killing process with pid 98230' 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@965 -- # kill 98230 00:26:41.244 [2024-05-13 18:39:56.973394] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:41.244 18:39:56 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@970 -- # wait 98230 00:26:41.504 18:39:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:41.504 18:39:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:41.504 18:39:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:41.504 18:39:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:41.504 18:39:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:41.504 18:39:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.504 18:39:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.504 18:39:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.504 18:39:57 nvmf_tcp.nvmf_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:41.504 00:26:41.504 real 0m47.717s 00:26:41.504 user 2m20.437s 00:26:41.504 sys 0m5.115s 00:26:41.504 18:39:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:41.504 18:39:57 nvmf_tcp.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.504 ************************************ 00:26:41.504 END TEST nvmf_timeout 00:26:41.504 ************************************ 
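The nvmf_timeout run above passes or fails on a single count: how many 'reconnect delay bdev controller NVMe0' events bdevperf recorded in /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt before the target was torn down. The lines below are a minimal shell sketch of that check, not the test script itself; the failure message and variable names are illustrative, while the trace path, the grep pattern, and the "two or fewer is a failure" threshold come straight from the log (the "(( 3 <= 2 ))" guard).

  # trace written by bdevperf during the timeout test (path as shown in the log)
  trace_file=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
  # count delayed reconnect attempts recorded by bdev_nvme
  delay_hits=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace_file")
  # two or fewer delayed reconnects means the reconnect-delay logic did not kick in
  if (( delay_hits <= 2 )); then
      echo "expected at least 3 delayed reconnects, saw $delay_hits" >&2
      exit 1
  fi
  rm -f "$trace_file"   # the test removes the trace once the check passes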
00:26:41.504 18:39:57 nvmf_tcp -- nvmf/nvmf.sh@119 -- # [[ virt == phy ]] 00:26:41.504 18:39:57 nvmf_tcp -- nvmf/nvmf.sh@124 -- # timing_exit host 00:26:41.504 18:39:57 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:41.504 18:39:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.504 18:39:57 nvmf_tcp -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:26:41.504 00:26:41.504 real 20m18.062s 00:26:41.504 user 62m55.181s 00:26:41.504 sys 4m17.754s 00:26:41.504 18:39:57 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:41.504 ************************************ 00:26:41.504 END TEST nvmf_tcp 00:26:41.504 18:39:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.504 ************************************ 00:26:41.504 18:39:57 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:26:41.504 18:39:57 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:41.504 18:39:57 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:41.504 18:39:57 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:41.504 18:39:57 -- common/autotest_common.sh@10 -- # set +x 00:26:41.504 ************************************ 00:26:41.504 START TEST spdkcli_nvmf_tcp 00:26:41.504 ************************************ 00:26:41.504 18:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:26:41.762 * Looking for test storage... 00:26:41.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:26:41.762 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:41.763 18:39:57 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=99110 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 99110 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 99110 ']' 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:41.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:41.763 18:39:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.763 [2024-05-13 18:39:57.571883] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:26:41.763 [2024-05-13 18:39:57.571974] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99110 ] 00:26:41.763 [2024-05-13 18:39:57.702888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:42.021 [2024-05-13 18:39:57.849211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.021 [2024-05-13 18:39:57.849230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:42.955 18:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:42.955 18:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:26:42.955 18:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:42.955 18:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:42.955 18:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:42.955 18:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:42.955 18:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:42.955 18:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:42.955 18:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:42.955 18:39:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:42.955 18:39:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:42.955 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:42.955 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:42.955 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:42.955 '\''/bdevs/malloc create 32 512 
Malloc5'\'' '\''Malloc5'\'' True 00:26:42.955 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:42.955 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:42.955 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:42.955 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:42.955 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:26:42.955 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:26:42.955 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:26:42.955 ' 00:26:45.530 [2024-05-13 18:40:01.285318] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.906 [2024-05-13 18:40:02.566144] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:46.906 [2024-05-13 18:40:02.566483] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:26:49.436 [2024-05-13 18:40:04.924028] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
127.0.0.1 port 4261 *** 00:26:51.333 [2024-05-13 18:40:06.981473] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:26:52.742 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:26:52.742 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:26:52.743 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:26:52.743 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:26:52.743 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:26:52.743 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:26:52.743 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:26:52.743 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:52.743 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:52.743 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces 
create Malloc5', 'Malloc5', True] 00:26:52.743 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:26:52.743 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:26:52.743 18:40:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:26:52.743 18:40:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:52.743 18:40:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.000 18:40:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:26:53.000 18:40:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:53.000 18:40:08 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.000 18:40:08 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:26:53.000 18:40:08 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:26:53.256 18:40:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:26:53.256 18:40:09 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:26:53.256 18:40:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:26:53.256 18:40:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:53.256 18:40:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.514 18:40:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:26:53.514 18:40:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:53.514 18:40:09 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:53.514 18:40:09 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:26:53.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:26:53.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:53.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:26:53.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:26:53.514 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:26:53.514 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:26:53.514 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:26:53.514 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:26:53.514 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:26:53.514 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:26:53.514 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:26:53.514 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:26:53.514 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:26:53.514 ' 00:26:58.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:26:58.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 
'Malloc4', False] 00:26:58.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:58.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:26:58.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:26:58.782 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:26:58.782 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:26:58.782 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:26:58.782 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:26:58.782 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:26:58.782 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:26:58.782 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:26:58.782 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:26:58.782 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:26:58.782 18:40:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:26:58.782 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:58.782 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:58.782 18:40:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 99110 00:26:58.782 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 99110 ']' 00:26:58.782 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 99110 00:26:58.782 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:26:58.783 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:58.783 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99110 00:26:58.783 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:58.783 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:58.783 killing process with pid 99110 00:26:58.783 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99110' 00:26:58.783 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 99110 00:26:58.783 [2024-05-13 18:40:14.600528] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:58.783 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 99110 00:26:59.042 18:40:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:59.042 18:40:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:59.042 18:40:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 99110 ']' 00:26:59.042 18:40:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 99110 00:26:59.042 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 99110 ']' 00:26:59.042 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 99110 00:26:59.042 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (99110) - No such process 
00:26:59.042 Process with pid 99110 is not found 00:26:59.042 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 99110 is not found' 00:26:59.042 18:40:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:59.042 18:40:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:59.042 18:40:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:59.042 00:26:59.042 real 0m17.437s 00:26:59.042 user 0m37.547s 00:26:59.042 sys 0m0.942s 00:26:59.042 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:59.042 18:40:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:59.042 ************************************ 00:26:59.042 END TEST spdkcli_nvmf_tcp 00:26:59.042 ************************************ 00:26:59.042 18:40:14 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:59.042 18:40:14 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:26:59.042 18:40:14 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:59.042 18:40:14 -- common/autotest_common.sh@10 -- # set +x 00:26:59.042 ************************************ 00:26:59.042 START TEST nvmf_identify_passthru 00:26:59.042 ************************************ 00:26:59.042 18:40:14 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:59.042 * Looking for test storage... 00:26:59.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:59.300 18:40:14 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:59.300 18:40:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:26:59.300 18:40:14 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:59.300 18:40:14 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.300 18:40:14 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.300 18:40:14 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.300 18:40:14 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.300 18:40:14 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.300 18:40:14 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.300 18:40:14 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.300 18:40:14 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.300 18:40:14 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.300 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:26:59.300 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:26:59.300 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.300 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.300 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 
00:26:59.300 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:59.301 18:40:15 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.301 18:40:15 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.301 18:40:15 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.301 18:40:15 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.301 18:40:15 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.301 18:40:15 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.301 18:40:15 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:59.301 18:40:15 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:59.301 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:59.301 18:40:15 nvmf_identify_passthru -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.301 18:40:15 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.301 18:40:15 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.301 18:40:15 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.301 18:40:15 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.301 18:40:15 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.301 18:40:15 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:26:59.301 18:40:15 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.301 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.301 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:59.301 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:26:59.301 18:40:15 nvmf_identify_passthru -- 
nvmf/common.sh@416 -- # [[ no == yes ]] 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@432 -- # nvmf_veth_init 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:59.301 Cannot find device "nvmf_tgt_br" 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@155 -- # true 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:59.301 Cannot find device "nvmf_tgt_br2" 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@156 -- # true 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:59.301 Cannot find device "nvmf_tgt_br" 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@158 -- # true 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:59.301 Cannot find device "nvmf_tgt_br2" 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@159 -- # true 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:59.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:26:59.301 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:59.302 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:59.302 18:40:15 
nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:26:59.302 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:59.302 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:59.302 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:59.302 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:59.302 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:59.302 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:59.302 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:59.302 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:59.302 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:59.302 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:59.302 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:59.302 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:59.302 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:59.302 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:59.560 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:59.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:59.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:26:59.561 00:26:59.561 --- 10.0.0.2 ping statistics --- 00:26:59.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.561 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:59.561 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:59.561 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:26:59.561 00:26:59.561 --- 10.0.0.3 ping statistics --- 00:26:59.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.561 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:59.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:59.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:26:59.561 00:26:59.561 --- 10.0.0.1 ping statistics --- 00:26:59.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:59.561 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@433 -- # return 0 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:59.561 18:40:15 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:59.561 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:59.561 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:59.561 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:59.561 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:59.561 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:26:59.561 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:26:59.561 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:26:59.561 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:26:59.561 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:26:59.561 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:26:59.561 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:59.561 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:26:59.561 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:59.561 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 2 == 0 )) 00:26:59.561 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:26:59.561 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:26:59.561 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:26:59.561 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:26:59.561 18:40:15 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:26:59.561 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:59.561 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:59.821 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:26:59.821 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:26:59.821 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:59.821 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:00.080 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:27:00.080 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:00.080 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:00.080 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:00.080 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:00.080 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:00.080 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:00.080 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=99597 00:27:00.080 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:00.080 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:00.080 18:40:15 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 99597 00:27:00.080 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 99597 ']' 00:27:00.080 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.080 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:00.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.080 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.080 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:00.080 18:40:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:00.080 [2024-05-13 18:40:15.905090] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:27:00.080 [2024-05-13 18:40:15.905178] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.339 [2024-05-13 18:40:16.045369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:00.339 [2024-05-13 18:40:16.178502] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:00.339 [2024-05-13 18:40:16.178908] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:00.339 [2024-05-13 18:40:16.179066] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.339 [2024-05-13 18:40:16.179226] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.339 [2024-05-13 18:40:16.179364] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:00.339 [2024-05-13 18:40:16.179617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.339 [2024-05-13 18:40:16.179653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:00.339 [2024-05-13 18:40:16.179724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.339 [2024-05-13 18:40:16.179725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:01.275 18:40:16 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:01.275 18:40:16 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:27:01.275 18:40:16 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:01.275 18:40:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.275 18:40:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.275 18:40:16 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.275 18:40:16 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:01.275 18:40:16 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.275 18:40:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.275 [2024-05-13 18:40:17.060440] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.275 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.275 [2024-05-13 18:40:17.074808] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.275 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.275 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.275 Nvme0n1 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.275 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -m 1 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.275 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.275 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.275 [2024-05-13 18:40:17.207882] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:01.275 [2024-05-13 18:40:17.208335] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.275 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.275 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.632 [ 00:27:01.632 { 00:27:01.632 "allow_any_host": true, 00:27:01.632 "hosts": [], 00:27:01.632 "listen_addresses": [], 00:27:01.632 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:01.632 "subtype": "Discovery" 00:27:01.632 }, 00:27:01.632 { 00:27:01.632 "allow_any_host": true, 00:27:01.632 "hosts": [], 00:27:01.632 "listen_addresses": [ 00:27:01.632 { 00:27:01.632 "adrfam": "IPv4", 00:27:01.632 "traddr": "10.0.0.2", 00:27:01.632 "trsvcid": "4420", 00:27:01.632 "trtype": "TCP" 00:27:01.632 } 00:27:01.632 ], 00:27:01.632 "max_cntlid": 65519, 00:27:01.632 "max_namespaces": 1, 00:27:01.632 "min_cntlid": 1, 00:27:01.632 "model_number": "SPDK bdev Controller", 00:27:01.632 "namespaces": [ 00:27:01.632 { 00:27:01.632 "bdev_name": "Nvme0n1", 00:27:01.632 "name": "Nvme0n1", 00:27:01.632 "nguid": "3E0E1CD2BBB64AD185B61FD55A686A92", 00:27:01.632 "nsid": 1, 00:27:01.632 "uuid": "3e0e1cd2-bbb6-4ad1-85b6-1fd55a686a92" 00:27:01.632 } 00:27:01.632 ], 00:27:01.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:01.632 "serial_number": "SPDK00000000000001", 00:27:01.632 "subtype": "NVMe" 00:27:01.632 } 00:27:01.632 ] 00:27:01.632 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.632 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:01.632 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:01.632 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:01.632 18:40:17 
nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:27:01.632 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:01.632 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:01.632 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:01.891 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:27:01.891 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:27:01.891 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:27:01.891 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:01.891 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:01.891 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:01.891 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:01.891 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:01.891 18:40:17 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:01.891 18:40:17 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:01.891 18:40:17 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:27:01.891 18:40:17 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:01.891 18:40:17 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:27:01.891 18:40:17 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:01.891 18:40:17 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:01.891 rmmod nvme_tcp 00:27:01.891 rmmod nvme_fabrics 00:27:01.891 rmmod nvme_keyring 00:27:01.891 18:40:17 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:01.891 18:40:17 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:27:01.891 18:40:17 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:27:01.891 18:40:17 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 99597 ']' 00:27:01.891 18:40:17 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 99597 00:27:01.891 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 99597 ']' 00:27:01.891 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 99597 00:27:01.891 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:27:01.891 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:27:01.891 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99597 00:27:01.891 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:27:01.891 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:27:01.891 killing process with pid 99597 00:27:01.891 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99597' 00:27:01.891 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 99597 00:27:01.891 [2024-05-13 18:40:17.787307] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:01.891 18:40:17 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 99597 00:27:02.149 18:40:18 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:02.149 18:40:18 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:02.149 18:40:18 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:02.149 18:40:18 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:02.149 18:40:18 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:02.149 18:40:18 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.149 18:40:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:02.149 18:40:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.149 18:40:18 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:02.149 00:27:02.149 real 0m3.171s 00:27:02.149 user 0m7.845s 00:27:02.149 sys 0m0.830s 00:27:02.149 18:40:18 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:02.149 18:40:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:02.149 ************************************ 00:27:02.149 END TEST nvmf_identify_passthru 00:27:02.149 ************************************ 00:27:02.408 18:40:18 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:02.408 18:40:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:02.408 18:40:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:02.408 18:40:18 -- common/autotest_common.sh@10 -- # set +x 00:27:02.408 ************************************ 00:27:02.408 START TEST nvmf_dif 00:27:02.408 ************************************ 00:27:02.408 18:40:18 nvmf_dif -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:02.408 * Looking for test storage... 
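(The identify_passthru check that just completed above boils down to running spdk_nvme_identify twice -- once against the local PCIe controller and once against the NVMe/TCP subsystem that passes it through -- and requiring that the serial and model numbers match. A minimal sketch of that comparison, assuming the same build/bin/spdk_nvme_identify binary, PCIe address 0000:00:10.0 and TCP endpoint 10.0.0.2:4420 / nqn.2016-06.io.spdk:cnode1 used in this run:)
    # Serial number as seen directly over PCIe
    local_sn=$(./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' | grep 'Serial Number:' | awk '{print $3}')
    # Serial number as reported by the passthru subsystem over NVMe/TCP
    remote_sn=$(./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:' | awk '{print $3}')
    # The test fails when passthru does not forward the original identify data
    [ "$local_sn" = "$remote_sn" ] || echo "passthru identify mismatch: $local_sn vs $remote_sn"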
00:27:02.408 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:02.408 18:40:18 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:02.408 18:40:18 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.408 18:40:18 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.408 18:40:18 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.408 18:40:18 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.408 18:40:18 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.408 18:40:18 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.408 18:40:18 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:02.408 18:40:18 nvmf_dif -- paths/export.sh@6 
-- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:02.408 18:40:18 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:02.408 18:40:18 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:02.408 18:40:18 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:02.408 18:40:18 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:02.408 18:40:18 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.408 18:40:18 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.409 18:40:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:02.409 18:40:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@432 -- # nvmf_veth_init 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:02.409 18:40:18 
nvmf_dif -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:02.409 Cannot find device "nvmf_tgt_br" 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@155 -- # true 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:02.409 Cannot find device "nvmf_tgt_br2" 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@156 -- # true 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:02.409 Cannot find device "nvmf_tgt_br" 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@158 -- # true 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:02.409 Cannot find device "nvmf_tgt_br2" 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@159 -- # true 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:02.409 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@162 -- # true 00:27:02.409 18:40:18 nvmf_dif -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:02.667 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@163 -- # true 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:02.668 
18:40:18 nvmf_dif -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:02.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:02.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:27:02.668 00:27:02.668 --- 10.0.0.2 ping statistics --- 00:27:02.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.668 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:02.668 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:02.668 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:27:02.668 00:27:02.668 --- 10.0.0.3 ping statistics --- 00:27:02.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.668 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:02.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:02.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:27:02.668 00:27:02.668 --- 10.0.0.1 ping statistics --- 00:27:02.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:02.668 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@433 -- # return 0 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:02.668 18:40:18 nvmf_dif -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:03.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:03.235 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:03.235 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:03.235 18:40:18 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:03.235 18:40:18 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:03.235 18:40:18 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:03.235 18:40:18 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:03.235 18:40:18 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:03.235 18:40:18 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:03.235 18:40:18 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:03.235 18:40:18 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:03.235 18:40:18 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:03.235 18:40:18 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:27:03.235 18:40:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:03.235 18:40:18 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=99938 00:27:03.235 
18:40:18 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:03.235 18:40:18 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 99938 00:27:03.235 18:40:18 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 99938 ']' 00:27:03.236 18:40:18 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.236 18:40:18 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:27:03.236 18:40:18 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.236 18:40:18 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:27:03.236 18:40:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:03.236 [2024-05-13 18:40:19.022453] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:27:03.236 [2024-05-13 18:40:19.022552] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.236 [2024-05-13 18:40:19.162177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.495 [2024-05-13 18:40:19.284235] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.495 [2024-05-13 18:40:19.284293] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.495 [2024-05-13 18:40:19.284308] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.495 [2024-05-13 18:40:19.284319] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.495 [2024-05-13 18:40:19.284327] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
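(The nvmf target for the dif tests runs inside the nvmf_tgt_ns_spdk network namespace created above; with NET_TYPE=virt the topology is two veth pairs joined by a bridge, initiator side at 10.0.0.1 and target side at 10.0.0.2/10.0.0.3. A condensed sketch of that setup, assuming the same interface names and addresses logged by nvmf_veth_init above:)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT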
00:27:03.495 [2024-05-13 18:40:19.284356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.064 18:40:19 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:27:04.064 18:40:19 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:27:04.064 18:40:19 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:04.064 18:40:19 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:04.064 18:40:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:04.323 18:40:20 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.323 18:40:20 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:04.323 18:40:20 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:04.323 18:40:20 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.323 18:40:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:04.323 [2024-05-13 18:40:20.029419] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.323 18:40:20 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.323 18:40:20 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:04.323 18:40:20 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:04.323 18:40:20 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:04.323 18:40:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:04.323 ************************************ 00:27:04.323 START TEST fio_dif_1_default 00:27:04.324 ************************************ 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:04.324 bdev_null0 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:04.324 [2024-05-13 18:40:20.077333] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:27:04.324 [2024-05-13 18:40:20.077544] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:04.324 { 00:27:04.324 "params": { 00:27:04.324 "name": "Nvme$subsystem", 00:27:04.324 "trtype": "$TEST_TRANSPORT", 00:27:04.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:04.324 "adrfam": "ipv4", 00:27:04.324 "trsvcid": "$NVMF_PORT", 00:27:04.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:04.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:04.324 "hdgst": ${hdgst:-false}, 00:27:04.324 "ddgst": ${ddgst:-false} 00:27:04.324 }, 00:27:04.324 "method": "bdev_nvme_attach_controller" 00:27:04.324 } 00:27:04.324 EOF 00:27:04.324 )") 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:04.324 18:40:20 
nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:04.324 "params": { 00:27:04.324 "name": "Nvme0", 00:27:04.324 "trtype": "tcp", 00:27:04.324 "traddr": "10.0.0.2", 00:27:04.324 "adrfam": "ipv4", 00:27:04.324 "trsvcid": "4420", 00:27:04.324 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:04.324 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:04.324 "hdgst": false, 00:27:04.324 "ddgst": false 00:27:04.324 }, 00:27:04.324 "method": "bdev_nvme_attach_controller" 00:27:04.324 }' 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:04.324 18:40:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:04.582 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:04.582 fio-3.35 00:27:04.582 Starting 1 thread 00:27:16.892 00:27:16.892 filename0: (groupid=0, jobs=1): err= 0: pid=100021: Mon May 13 18:40:30 2024 00:27:16.892 read: IOPS=1668, BW=6673KiB/s (6833kB/s)(65.4MiB/10030msec) 00:27:16.892 slat (nsec): min=6573, max=77976, avg=8615.87, stdev=2921.88 00:27:16.892 clat (usec): min=426, max=42549, avg=2372.03, stdev=8566.10 00:27:16.892 lat (usec): min=434, max=42559, avg=2380.64, stdev=8566.16 00:27:16.892 clat percentiles (usec): 00:27:16.892 | 1.00th=[ 441], 5.00th=[ 449], 10.00th=[ 453], 20.00th=[ 457], 00:27:16.892 | 30.00th=[ 465], 40.00th=[ 469], 50.00th=[ 474], 60.00th=[ 478], 00:27:16.892 | 70.00th=[ 486], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 635], 00:27:16.892 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[42206], 00:27:16.892 | 99.99th=[42730] 00:27:16.892 bw ( KiB/s): min= 3456, max=15968, per=100.00%, avg=6691.20, stdev=3071.88, samples=20 00:27:16.892 iops : min= 
864, max= 3992, avg=1672.80, stdev=767.97, samples=20 00:27:16.892 lat (usec) : 500=84.95%, 750=10.35%, 1000=0.02% 00:27:16.892 lat (msec) : 2=0.01%, 10=0.02%, 50=4.66% 00:27:16.892 cpu : usr=90.40%, sys=8.63%, ctx=20, majf=0, minf=9 00:27:16.892 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:16.892 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.892 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:16.892 issued rwts: total=16732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:16.892 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:16.892 00:27:16.892 Run status group 0 (all jobs): 00:27:16.892 READ: bw=6673KiB/s (6833kB/s), 6673KiB/s-6673KiB/s (6833kB/s-6833kB/s), io=65.4MiB (68.5MB), run=10030-10030msec 00:27:16.892 18:40:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:16.892 18:40:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:16.892 18:40:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:16.892 18:40:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:16.892 18:40:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:16.892 18:40:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:16.892 18:40:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.893 00:27:16.893 real 0m11.078s 00:27:16.893 user 0m9.756s 00:27:16.893 sys 0m1.138s 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 ************************************ 00:27:16.893 END TEST fio_dif_1_default 00:27:16.893 ************************************ 00:27:16.893 18:40:31 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:16.893 18:40:31 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:16.893 18:40:31 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:16.893 18:40:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 ************************************ 00:27:16.893 START TEST fio_dif_1_multi_subsystems 00:27:16.893 ************************************ 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # 
for sub in "$@" 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 bdev_null0 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 [2024-05-13 18:40:31.199750] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 bdev_null1 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 18:40:31 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:16.893 { 00:27:16.893 "params": { 00:27:16.893 "name": "Nvme$subsystem", 00:27:16.893 "trtype": "$TEST_TRANSPORT", 00:27:16.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.893 "adrfam": "ipv4", 00:27:16.893 "trsvcid": 
"$NVMF_PORT", 00:27:16.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.893 "hdgst": ${hdgst:-false}, 00:27:16.893 "ddgst": ${ddgst:-false} 00:27:16.893 }, 00:27:16.893 "method": "bdev_nvme_attach_controller" 00:27:16.893 } 00:27:16.893 EOF 00:27:16.893 )") 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:16.893 { 00:27:16.893 "params": { 00:27:16.893 "name": "Nvme$subsystem", 00:27:16.893 "trtype": "$TEST_TRANSPORT", 00:27:16.893 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:16.893 "adrfam": "ipv4", 00:27:16.893 "trsvcid": "$NVMF_PORT", 00:27:16.893 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:16.893 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:16.893 "hdgst": ${hdgst:-false}, 00:27:16.893 "ddgst": ${ddgst:-false} 00:27:16.893 }, 00:27:16.893 "method": "bdev_nvme_attach_controller" 00:27:16.893 } 00:27:16.893 EOF 00:27:16.893 )") 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:27:16.893 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:16.893 "params": { 00:27:16.893 "name": "Nvme0", 00:27:16.893 "trtype": "tcp", 00:27:16.893 "traddr": "10.0.0.2", 00:27:16.893 "adrfam": "ipv4", 00:27:16.893 "trsvcid": "4420", 00:27:16.893 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:16.893 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:16.893 "hdgst": false, 00:27:16.893 "ddgst": false 00:27:16.893 }, 00:27:16.893 "method": "bdev_nvme_attach_controller" 00:27:16.893 },{ 00:27:16.893 "params": { 00:27:16.893 "name": "Nvme1", 00:27:16.893 "trtype": "tcp", 00:27:16.893 "traddr": "10.0.0.2", 00:27:16.893 "adrfam": "ipv4", 00:27:16.893 "trsvcid": "4420", 00:27:16.893 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:16.893 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:16.893 "hdgst": false, 00:27:16.893 "ddgst": false 00:27:16.893 }, 00:27:16.894 "method": "bdev_nvme_attach_controller" 00:27:16.894 }' 00:27:16.894 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:16.894 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:16.894 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.894 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:16.894 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:16.894 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:16.894 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:16.894 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:16.894 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:16.894 18:40:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:16.894 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:16.894 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:16.894 fio-3.35 00:27:16.894 Starting 2 threads 00:27:26.990 00:27:26.990 filename0: (groupid=0, jobs=1): err= 0: pid=100176: Mon May 13 18:40:42 2024 00:27:26.990 read: IOPS=193, BW=772KiB/s (791kB/s)(7744KiB/10029msec) 00:27:26.990 slat (nsec): min=5740, max=54727, avg=10910.75, stdev=6073.38 00:27:26.990 clat (usec): min=447, max=42502, avg=20684.44, stdev=20316.93 00:27:26.990 lat (usec): min=455, max=42512, avg=20695.36, stdev=20316.72 00:27:26.990 clat percentiles (usec): 00:27:26.990 | 1.00th=[ 457], 5.00th=[ 469], 10.00th=[ 478], 20.00th=[ 490], 00:27:26.990 | 30.00th=[ 506], 40.00th=[ 545], 50.00th=[ 1074], 60.00th=[41157], 00:27:26.990 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:27:26.990 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:27:26.990 | 99.99th=[42730] 00:27:26.990 bw ( KiB/s): min= 544, max= 992, per=48.06%, avg=772.70, stdev=118.87, samples=20 00:27:26.990 iops : 
min= 136, max= 248, avg=193.15, stdev=29.70, samples=20 00:27:26.990 lat (usec) : 500=27.53%, 750=18.34%, 1000=3.31% 00:27:26.990 lat (msec) : 2=1.24%, 50=49.59% 00:27:26.990 cpu : usr=94.80%, sys=4.38%, ctx=109, majf=0, minf=0 00:27:26.990 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:26.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.990 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.990 issued rwts: total=1936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:26.990 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:26.990 filename1: (groupid=0, jobs=1): err= 0: pid=100177: Mon May 13 18:40:42 2024 00:27:26.990 read: IOPS=208, BW=834KiB/s (854kB/s)(8368KiB/10031msec) 00:27:26.990 slat (nsec): min=7444, max=53844, avg=10056.92, stdev=4918.82 00:27:26.990 clat (usec): min=439, max=42813, avg=19146.75, stdev=20219.51 00:27:26.990 lat (usec): min=447, max=42831, avg=19156.81, stdev=20219.60 00:27:26.990 clat percentiles (usec): 00:27:26.990 | 1.00th=[ 453], 5.00th=[ 461], 10.00th=[ 469], 20.00th=[ 482], 00:27:26.990 | 30.00th=[ 494], 40.00th=[ 515], 50.00th=[ 799], 60.00th=[40633], 00:27:26.990 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:27:26.990 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:27:26.990 | 99.99th=[42730] 00:27:26.990 bw ( KiB/s): min= 544, max= 1184, per=51.99%, avg=835.20, stdev=161.81, samples=20 00:27:26.990 iops : min= 136, max= 296, avg=208.80, stdev=40.45, samples=20 00:27:26.990 lat (usec) : 500=33.99%, 750=15.15%, 1000=3.59% 00:27:26.990 lat (msec) : 2=1.39%, 50=45.89% 00:27:26.990 cpu : usr=94.77%, sys=4.78%, ctx=76, majf=0, minf=0 00:27:26.990 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:26.990 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.991 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:26.991 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:26.991 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:26.991 00:27:26.991 Run status group 0 (all jobs): 00:27:26.991 READ: bw=1606KiB/s (1645kB/s), 772KiB/s-834KiB/s (791kB/s-854kB/s), io=15.7MiB (16.5MB), run=10029-10031msec 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.991 18:40:42 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:26.991 ************************************ 00:27:26.991 END TEST fio_dif_1_multi_subsystems 00:27:26.991 ************************************ 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.991 00:27:26.991 real 0m11.246s 00:27:26.991 user 0m19.849s 00:27:26.991 sys 0m1.181s 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:26.991 18:40:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:26.991 18:40:42 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:26.991 18:40:42 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:26.991 18:40:42 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:26.991 18:40:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:26.991 ************************************ 00:27:26.991 START TEST fio_dif_rand_params 00:27:26.991 ************************************ 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:26.991 bdev_null0 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:26.991 [2024-05-13 18:40:42.503222] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:26.991 { 00:27:26.991 "params": { 00:27:26.991 "name": "Nvme$subsystem", 00:27:26.991 "trtype": "$TEST_TRANSPORT", 00:27:26.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:26.991 "adrfam": "ipv4", 00:27:26.991 "trsvcid": "$NVMF_PORT", 00:27:26.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:26.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:26.991 "hdgst": ${hdgst:-false}, 00:27:26.991 "ddgst": ${ddgst:-false} 00:27:26.991 }, 00:27:26.991 "method": "bdev_nvme_attach_controller" 00:27:26.991 } 00:27:26.991 EOF 00:27:26.991 )") 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 
-- # local file 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:26.991 "params": { 00:27:26.991 "name": "Nvme0", 00:27:26.991 "trtype": "tcp", 00:27:26.991 "traddr": "10.0.0.2", 00:27:26.991 "adrfam": "ipv4", 00:27:26.991 "trsvcid": "4420", 00:27:26.991 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:26.991 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:26.991 "hdgst": false, 00:27:26.991 "ddgst": false 00:27:26.991 }, 00:27:26.991 "method": "bdev_nvme_attach_controller" 00:27:26.991 }' 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:26.991 18:40:42 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:26.991 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:26.991 ... 00:27:26.991 fio-3.35 00:27:26.991 Starting 3 threads 00:27:33.550 00:27:33.550 filename0: (groupid=0, jobs=1): err= 0: pid=100328: Mon May 13 18:40:48 2024 00:27:33.550 read: IOPS=232, BW=29.0MiB/s (30.5MB/s)(145MiB/5005msec) 00:27:33.550 slat (nsec): min=7470, max=52717, avg=11833.69, stdev=4440.12 00:27:33.550 clat (usec): min=5268, max=55306, avg=12889.85, stdev=6661.23 00:27:33.550 lat (usec): min=5279, max=55329, avg=12901.68, stdev=6661.63 00:27:33.550 clat percentiles (usec): 00:27:33.550 | 1.00th=[ 6652], 5.00th=[ 9372], 10.00th=[10552], 20.00th=[11207], 00:27:33.550 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:27:33.550 | 70.00th=[12387], 80.00th=[12780], 90.00th=[13173], 95.00th=[13829], 00:27:33.550 | 99.00th=[53216], 99.50th=[54264], 99.90th=[54789], 99.95th=[55313], 00:27:33.550 | 99.99th=[55313] 00:27:33.550 bw ( KiB/s): min=24576, max=31488, per=31.98%, avg=29702.10, stdev=1979.21, samples=10 00:27:33.550 iops : min= 192, max= 246, avg=232.00, stdev=15.43, samples=10 00:27:33.550 lat (msec) : 10=5.76%, 20=91.66%, 100=2.58% 00:27:33.550 cpu : usr=92.77%, sys=5.86%, ctx=38, majf=0, minf=0 00:27:33.550 IO depths : 1=9.3%, 2=90.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:33.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.550 issued rwts: total=1163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:33.550 filename0: (groupid=0, jobs=1): err= 0: pid=100329: Mon May 13 18:40:48 2024 00:27:33.550 read: IOPS=278, BW=34.8MiB/s (36.5MB/s)(174MiB/5006msec) 00:27:33.550 slat (nsec): min=5666, max=39452, avg=11688.05, stdev=3364.47 00:27:33.550 clat (usec): min=5873, max=52618, avg=10745.54, stdev=3969.09 00:27:33.550 lat (usec): min=5884, max=52629, avg=10757.23, stdev=3969.13 00:27:33.550 clat percentiles (usec): 00:27:33.550 | 1.00th=[ 6521], 5.00th=[ 7504], 10.00th=[ 8160], 20.00th=[ 9634], 00:27:33.550 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10683], 60.00th=[10945], 00:27:33.550 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:27:33.550 | 99.00th=[14353], 99.50th=[50594], 99.90th=[52167], 99.95th=[52691], 00:27:33.550 | 99.99th=[52691] 00:27:33.550 bw ( KiB/s): min=32191, max=39680, per=38.39%, avg=35654.30, stdev=2278.44, samples=10 00:27:33.550 iops : min= 251, max= 310, avg=278.50, stdev=17.88, samples=10 00:27:33.550 lat (msec) : 10=26.52%, 20=72.62%, 50=0.14%, 100=0.72% 00:27:33.550 cpu : usr=92.35%, sys=6.09%, ctx=64, majf=0, minf=0 00:27:33.550 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:33.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.550 issued rwts: total=1395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:33.550 filename0: (groupid=0, jobs=1): err= 0: pid=100330: Mon May 13 18:40:48 2024 00:27:33.550 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(134MiB/5002msec) 00:27:33.550 slat (nsec): min=7385, max=34191, avg=10635.49, stdev=4243.60 
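For reference, the target setup traced above reduces to four RPCs against the running nvmf target. A minimal sketch, assuming the test's rpc_cmd wrapper is swapped for a direct scripts/rpc.py call on the default RPC socket (the arguments themselves are taken verbatim from the trace):

# null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3 (per NULL_DIF=3 above)
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
# export it over NVMe/TCP on 10.0.0.2:4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The 3-thread job whose results are being printed here (randread, 128 KiB blocks, iodepth 3, 5 s runtime, per the NULL_DIF=3 parameters set above) then reaches that namespace through the spdk_bdev ioengine rather than a kernel initiator.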
00:27:33.550 clat (usec): min=4228, max=17049, avg=13944.36, stdev=2267.96 00:27:33.550 lat (usec): min=4236, max=17061, avg=13954.99, stdev=2267.91 00:27:33.550 clat percentiles (usec): 00:27:33.550 | 1.00th=[ 4293], 5.00th=[ 9110], 10.00th=[ 9765], 20.00th=[13435], 00:27:33.550 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:27:33.550 | 70.00th=[15139], 80.00th=[15533], 90.00th=[15926], 95.00th=[16319], 00:27:33.550 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:27:33.550 | 99.99th=[17171] 00:27:33.550 bw ( KiB/s): min=26112, max=29184, per=29.40%, avg=27306.67, stdev=1159.09, samples=9 00:27:33.550 iops : min= 204, max= 228, avg=213.33, stdev= 9.06, samples=9 00:27:33.550 lat (msec) : 10=10.89%, 20=89.11% 00:27:33.550 cpu : usr=92.40%, sys=6.24%, ctx=29, majf=0, minf=0 00:27:33.550 IO depths : 1=31.2%, 2=68.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:33.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.550 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:33.550 issued rwts: total=1074,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:33.550 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:33.550 00:27:33.550 Run status group 0 (all jobs): 00:27:33.550 READ: bw=90.7MiB/s (95.1MB/s), 26.8MiB/s-34.8MiB/s (28.1MB/s-36.5MB/s), io=454MiB (476MB), run=5002-5006msec 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:33.550 18:40:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.550 bdev_null0 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.550 [2024-05-13 18:40:48.614285] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.550 bdev_null1 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:33.550 18:40:48 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:33.550 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.551 bdev_null2 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:33.551 18:40:48 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.551 { 00:27:33.551 "params": { 00:27:33.551 "name": "Nvme$subsystem", 00:27:33.551 "trtype": "$TEST_TRANSPORT", 00:27:33.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.551 "adrfam": "ipv4", 00:27:33.551 "trsvcid": "$NVMF_PORT", 00:27:33.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.551 "hdgst": ${hdgst:-false}, 00:27:33.551 "ddgst": ${ddgst:-false} 00:27:33.551 }, 00:27:33.551 "method": "bdev_nvme_attach_controller" 00:27:33.551 } 00:27:33.551 EOF 00:27:33.551 )") 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.551 { 00:27:33.551 "params": { 00:27:33.551 "name": "Nvme$subsystem", 00:27:33.551 "trtype": "$TEST_TRANSPORT", 00:27:33.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.551 "adrfam": "ipv4", 00:27:33.551 "trsvcid": "$NVMF_PORT", 00:27:33.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.551 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:27:33.551 "hdgst": ${hdgst:-false}, 00:27:33.551 "ddgst": ${ddgst:-false} 00:27:33.551 }, 00:27:33.551 "method": "bdev_nvme_attach_controller" 00:27:33.551 } 00:27:33.551 EOF 00:27:33.551 )") 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:33.551 { 00:27:33.551 "params": { 00:27:33.551 "name": "Nvme$subsystem", 00:27:33.551 "trtype": "$TEST_TRANSPORT", 00:27:33.551 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:33.551 "adrfam": "ipv4", 00:27:33.551 "trsvcid": "$NVMF_PORT", 00:27:33.551 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:33.551 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:33.551 "hdgst": ${hdgst:-false}, 00:27:33.551 "ddgst": ${ddgst:-false} 00:27:33.551 }, 00:27:33.551 "method": "bdev_nvme_attach_controller" 00:27:33.551 } 00:27:33.551 EOF 00:27:33.551 )") 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:33.551 "params": { 00:27:33.551 "name": "Nvme0", 00:27:33.551 "trtype": "tcp", 00:27:33.551 "traddr": "10.0.0.2", 00:27:33.551 "adrfam": "ipv4", 00:27:33.551 "trsvcid": "4420", 00:27:33.551 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:33.551 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:33.551 "hdgst": false, 00:27:33.551 "ddgst": false 00:27:33.551 }, 00:27:33.551 "method": "bdev_nvme_attach_controller" 00:27:33.551 },{ 00:27:33.551 "params": { 00:27:33.551 "name": "Nvme1", 00:27:33.551 "trtype": "tcp", 00:27:33.551 "traddr": "10.0.0.2", 00:27:33.551 "adrfam": "ipv4", 00:27:33.551 "trsvcid": "4420", 00:27:33.551 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:33.551 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:33.551 "hdgst": false, 00:27:33.551 "ddgst": false 00:27:33.551 }, 00:27:33.551 "method": "bdev_nvme_attach_controller" 00:27:33.551 },{ 00:27:33.551 "params": { 00:27:33.551 "name": "Nvme2", 00:27:33.551 "trtype": "tcp", 00:27:33.551 "traddr": "10.0.0.2", 00:27:33.551 "adrfam": "ipv4", 00:27:33.551 "trsvcid": "4420", 00:27:33.551 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:33.551 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:33.551 "hdgst": false, 00:27:33.551 "ddgst": false 00:27:33.551 }, 00:27:33.551 "method": "bdev_nvme_attach_controller" 00:27:33.551 }' 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:33.551 18:40:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:33.551 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:33.551 ... 00:27:33.551 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:33.551 ... 00:27:33.551 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:33.551 ... 00:27:33.551 fio-3.35 00:27:33.551 Starting 24 threads 00:27:45.768 00:27:45.768 filename0: (groupid=0, jobs=1): err= 0: pid=100431: Mon May 13 18:40:59 2024 00:27:45.768 read: IOPS=199, BW=799KiB/s (818kB/s)(7996KiB/10010msec) 00:27:45.768 slat (nsec): min=4843, max=49832, avg=11956.02, stdev=5246.15 00:27:45.768 clat (msec): min=12, max=200, avg=80.02, stdev=28.15 00:27:45.768 lat (msec): min=12, max=200, avg=80.03, stdev=28.15 00:27:45.768 clat percentiles (msec): 00:27:45.768 | 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 48], 20.00th=[ 58], 00:27:45.768 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 82], 00:27:45.768 | 70.00th=[ 88], 80.00th=[ 100], 90.00th=[ 120], 95.00th=[ 134], 00:27:45.768 | 99.00th=[ 161], 99.50th=[ 184], 99.90th=[ 201], 99.95th=[ 201], 00:27:45.768 | 99.99th=[ 201] 00:27:45.769 bw ( KiB/s): min= 384, max= 1152, per=4.01%, avg=795.60, stdev=192.36, samples=20 00:27:45.769 iops : min= 96, max= 288, avg=198.90, stdev=48.09, samples=20 00:27:45.769 lat (msec) : 20=0.50%, 50=11.11%, 100=69.08%, 250=19.31% 00:27:45.769 cpu : usr=39.42%, sys=0.86%, ctx=1126, majf=0, minf=9 00:27:45.769 IO depths : 1=2.6%, 2=5.5%, 4=14.3%, 8=67.2%, 16=10.5%, 32=0.0%, >=64=0.0% 00:27:45.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 complete : 0=0.0%, 4=91.2%, 8=3.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 issued rwts: total=1999,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.769 filename0: (groupid=0, jobs=1): err= 0: pid=100432: Mon May 13 18:40:59 2024 00:27:45.769 read: IOPS=191, BW=765KiB/s (784kB/s)(7668KiB/10017msec) 00:27:45.769 slat (usec): min=4, max=11027, avg=22.18, stdev=283.00 00:27:45.769 clat (msec): min=24, max=164, avg=83.41, stdev=22.65 00:27:45.769 lat (msec): min=24, max=164, avg=83.43, stdev=22.64 00:27:45.769 clat percentiles (msec): 00:27:45.769 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 69], 00:27:45.769 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 86], 00:27:45.769 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 115], 95.00th=[ 123], 00:27:45.769 | 99.00th=[ 150], 99.50th=[ 159], 99.90th=[ 165], 99.95th=[ 165], 00:27:45.769 | 99.99th=[ 165] 00:27:45.769 bw ( KiB/s): min= 608, max= 1024, per=3.85%, avg=763.45, stdev=119.06, samples=20 00:27:45.769 iops : min= 152, max= 256, avg=190.85, stdev=29.77, samples=20 00:27:45.769 lat (msec) : 50=6.99%, 100=70.74%, 250=22.27% 00:27:45.769 cpu : usr=39.94%, sys=0.95%, ctx=1086, majf=0, 
minf=9 00:27:45.769 IO depths : 1=1.9%, 2=4.5%, 4=13.2%, 8=68.9%, 16=11.5%, 32=0.0%, >=64=0.0% 00:27:45.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 complete : 0=0.0%, 4=91.2%, 8=4.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 issued rwts: total=1917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.769 filename0: (groupid=0, jobs=1): err= 0: pid=100433: Mon May 13 18:40:59 2024 00:27:45.769 read: IOPS=255, BW=1022KiB/s (1047kB/s)(10.0MiB/10047msec) 00:27:45.769 slat (usec): min=7, max=4024, avg=15.18, stdev=128.12 00:27:45.769 clat (msec): min=2, max=151, avg=62.47, stdev=21.43 00:27:45.769 lat (msec): min=2, max=151, avg=62.49, stdev=21.43 00:27:45.769 clat percentiles (msec): 00:27:45.769 | 1.00th=[ 6], 5.00th=[ 34], 10.00th=[ 42], 20.00th=[ 48], 00:27:45.769 | 30.00th=[ 51], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 66], 00:27:45.769 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 89], 95.00th=[ 101], 00:27:45.769 | 99.00th=[ 128], 99.50th=[ 138], 99.90th=[ 153], 99.95th=[ 153], 00:27:45.769 | 99.99th=[ 153] 00:27:45.769 bw ( KiB/s): min= 640, max= 1712, per=5.14%, avg=1020.05, stdev=224.27, samples=20 00:27:45.769 iops : min= 160, max= 428, avg=254.95, stdev=56.09, samples=20 00:27:45.769 lat (msec) : 4=0.62%, 10=1.87%, 20=0.62%, 50=25.93%, 100=66.47% 00:27:45.769 lat (msec) : 250=4.48% 00:27:45.769 cpu : usr=43.45%, sys=0.93%, ctx=1275, majf=0, minf=9 00:27:45.769 IO depths : 1=0.6%, 2=1.2%, 4=6.8%, 8=78.3%, 16=13.0%, 32=0.0%, >=64=0.0% 00:27:45.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 complete : 0=0.0%, 4=89.2%, 8=6.4%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 issued rwts: total=2568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.769 filename0: (groupid=0, jobs=1): err= 0: pid=100434: Mon May 13 18:40:59 2024 00:27:45.769 read: IOPS=175, BW=704KiB/s (721kB/s)(7048KiB/10014msec) 00:27:45.769 slat (usec): min=4, max=8039, avg=20.97, stdev=270.22 00:27:45.769 clat (msec): min=39, max=194, avg=90.71, stdev=26.18 00:27:45.769 lat (msec): min=39, max=194, avg=90.73, stdev=26.18 00:27:45.769 clat percentiles (msec): 00:27:45.769 | 1.00th=[ 45], 5.00th=[ 58], 10.00th=[ 61], 20.00th=[ 72], 00:27:45.769 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 85], 60.00th=[ 95], 00:27:45.769 | 70.00th=[ 99], 80.00th=[ 109], 90.00th=[ 123], 95.00th=[ 134], 00:27:45.769 | 99.00th=[ 171], 99.50th=[ 194], 99.90th=[ 194], 99.95th=[ 194], 00:27:45.769 | 99.99th=[ 194] 00:27:45.769 bw ( KiB/s): min= 424, max= 896, per=3.52%, avg=698.05, stdev=134.09, samples=20 00:27:45.769 iops : min= 106, max= 224, avg=174.50, stdev=33.52, samples=20 00:27:45.769 lat (msec) : 50=3.01%, 100=68.56%, 250=28.43% 00:27:45.769 cpu : usr=32.32%, sys=0.79%, ctx=878, majf=0, minf=9 00:27:45.769 IO depths : 1=2.8%, 2=6.1%, 4=16.5%, 8=64.4%, 16=10.3%, 32=0.0%, >=64=0.0% 00:27:45.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 complete : 0=0.0%, 4=91.8%, 8=3.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 issued rwts: total=1762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.769 filename0: (groupid=0, jobs=1): err= 0: pid=100435: Mon May 13 18:40:59 2024 00:27:45.769 read: IOPS=245, BW=980KiB/s (1004kB/s)(9832KiB/10028msec) 00:27:45.769 slat (usec): min=5, max=4188, avg=12.16, stdev=84.37 
00:27:45.769 clat (msec): min=30, max=150, avg=65.19, stdev=21.65 00:27:45.769 lat (msec): min=30, max=150, avg=65.20, stdev=21.65 00:27:45.769 clat percentiles (msec): 00:27:45.769 | 1.00th=[ 33], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 48], 00:27:45.769 | 30.00th=[ 52], 40.00th=[ 55], 50.00th=[ 59], 60.00th=[ 65], 00:27:45.769 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 96], 95.00th=[ 107], 00:27:45.769 | 99.00th=[ 132], 99.50th=[ 138], 99.90th=[ 150], 99.95th=[ 150], 00:27:45.769 | 99.99th=[ 150] 00:27:45.769 bw ( KiB/s): min= 632, max= 1248, per=4.92%, avg=976.80, stdev=184.61, samples=20 00:27:45.769 iops : min= 158, max= 312, avg=244.20, stdev=46.15, samples=20 00:27:45.769 lat (msec) : 50=27.79%, 100=64.16%, 250=8.06% 00:27:45.769 cpu : usr=39.68%, sys=1.07%, ctx=1188, majf=0, minf=9 00:27:45.769 IO depths : 1=0.2%, 2=0.4%, 4=5.3%, 8=80.2%, 16=13.8%, 32=0.0%, >=64=0.0% 00:27:45.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 complete : 0=0.0%, 4=88.9%, 8=7.1%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 issued rwts: total=2458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.769 filename0: (groupid=0, jobs=1): err= 0: pid=100436: Mon May 13 18:40:59 2024 00:27:45.769 read: IOPS=210, BW=842KiB/s (863kB/s)(8460KiB/10043msec) 00:27:45.769 slat (usec): min=5, max=8055, avg=20.68, stdev=261.91 00:27:45.769 clat (msec): min=18, max=191, avg=75.78, stdev=26.46 00:27:45.769 lat (msec): min=18, max=191, avg=75.80, stdev=26.47 00:27:45.769 clat percentiles (msec): 00:27:45.769 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 52], 00:27:45.769 | 30.00th=[ 59], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 79], 00:27:45.769 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 124], 00:27:45.769 | 99.00th=[ 150], 99.50th=[ 165], 99.90th=[ 192], 99.95th=[ 192], 00:27:45.769 | 99.99th=[ 192] 00:27:45.769 bw ( KiB/s): min= 520, max= 1168, per=4.23%, avg=839.40, stdev=190.83, samples=20 00:27:45.769 iops : min= 130, max= 292, avg=209.80, stdev=47.71, samples=20 00:27:45.769 lat (msec) : 20=0.76%, 50=17.54%, 100=66.10%, 250=15.60% 00:27:45.769 cpu : usr=37.88%, sys=1.06%, ctx=1099, majf=0, minf=9 00:27:45.769 IO depths : 1=0.7%, 2=1.4%, 4=8.6%, 8=76.5%, 16=12.7%, 32=0.0%, >=64=0.0% 00:27:45.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 complete : 0=0.0%, 4=89.2%, 8=6.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 issued rwts: total=2115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.769 filename0: (groupid=0, jobs=1): err= 0: pid=100437: Mon May 13 18:40:59 2024 00:27:45.769 read: IOPS=184, BW=738KiB/s (755kB/s)(7392KiB/10020msec) 00:27:45.769 slat (usec): min=4, max=4032, avg=25.53, stdev=228.55 00:27:45.769 clat (msec): min=31, max=251, avg=86.48, stdev=28.69 00:27:45.769 lat (msec): min=31, max=251, avg=86.50, stdev=28.70 00:27:45.769 clat percentiles (msec): 00:27:45.769 | 1.00th=[ 41], 5.00th=[ 51], 10.00th=[ 56], 20.00th=[ 69], 00:27:45.769 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 78], 60.00th=[ 84], 00:27:45.769 | 70.00th=[ 95], 80.00th=[ 106], 90.00th=[ 121], 95.00th=[ 150], 00:27:45.769 | 99.00th=[ 178], 99.50th=[ 203], 99.90th=[ 253], 99.95th=[ 253], 00:27:45.769 | 99.99th=[ 253] 00:27:45.769 bw ( KiB/s): min= 344, max= 1048, per=3.69%, avg=732.80, stdev=156.00, samples=20 00:27:45.769 iops : min= 86, max= 262, avg=183.20, stdev=39.00, samples=20 
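The second fio_dif_rand_params pass being reported here (NULL_DIF=2, 4k blocks, numjobs=8, iodepth=16, two extra files) repeats the same provisioning for three subsystems, cnode0 through cnode2. A compact sketch of that loop, again assuming scripts/rpc.py stands in for the rpc_cmd helper used in the trace:

for i in 0 1 2; do
  # DIF type 2 null bdevs this time, one per subsystem (per NULL_DIF=2 above)
  scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 2
  scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" --serial-number "53313233-$i" --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
  scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done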
00:27:45.769 lat (msec) : 50=4.65%, 100=71.92%, 250=23.16%, 500=0.27% 00:27:45.769 cpu : usr=43.42%, sys=1.15%, ctx=1296, majf=0, minf=9 00:27:45.769 IO depths : 1=3.0%, 2=6.4%, 4=16.9%, 8=63.9%, 16=9.7%, 32=0.0%, >=64=0.0% 00:27:45.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 complete : 0=0.0%, 4=91.5%, 8=3.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 issued rwts: total=1848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.769 filename0: (groupid=0, jobs=1): err= 0: pid=100438: Mon May 13 18:40:59 2024 00:27:45.769 read: IOPS=193, BW=773KiB/s (792kB/s)(7752KiB/10023msec) 00:27:45.769 slat (usec): min=4, max=4620, avg=24.61, stdev=228.84 00:27:45.769 clat (msec): min=28, max=190, avg=82.56, stdev=26.73 00:27:45.769 lat (msec): min=28, max=190, avg=82.58, stdev=26.73 00:27:45.769 clat percentiles (msec): 00:27:45.769 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 63], 00:27:45.769 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 84], 00:27:45.769 | 70.00th=[ 93], 80.00th=[ 105], 90.00th=[ 118], 95.00th=[ 136], 00:27:45.769 | 99.00th=[ 165], 99.50th=[ 171], 99.90th=[ 192], 99.95th=[ 192], 00:27:45.769 | 99.99th=[ 192] 00:27:45.769 bw ( KiB/s): min= 512, max= 1128, per=3.87%, avg=768.30, stdev=158.09, samples=20 00:27:45.769 iops : min= 128, max= 282, avg=192.05, stdev=39.53, samples=20 00:27:45.769 lat (msec) : 50=11.56%, 100=64.29%, 250=24.15% 00:27:45.769 cpu : usr=43.33%, sys=0.89%, ctx=1192, majf=0, minf=9 00:27:45.769 IO depths : 1=2.8%, 2=5.9%, 4=15.0%, 8=66.1%, 16=10.2%, 32=0.0%, >=64=0.0% 00:27:45.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 complete : 0=0.0%, 4=91.4%, 8=3.4%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 issued rwts: total=1938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.769 filename1: (groupid=0, jobs=1): err= 0: pid=100439: Mon May 13 18:40:59 2024 00:27:45.769 read: IOPS=241, BW=966KiB/s (989kB/s)(9664KiB/10004msec) 00:27:45.769 slat (usec): min=7, max=4019, avg=15.87, stdev=125.24 00:27:45.769 clat (msec): min=3, max=192, avg=66.15, stdev=24.62 00:27:45.769 lat (msec): min=3, max=192, avg=66.17, stdev=24.62 00:27:45.769 clat percentiles (msec): 00:27:45.769 | 1.00th=[ 5], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 48], 00:27:45.769 | 30.00th=[ 53], 40.00th=[ 56], 50.00th=[ 64], 60.00th=[ 71], 00:27:45.769 | 70.00th=[ 77], 80.00th=[ 86], 90.00th=[ 101], 95.00th=[ 108], 00:27:45.769 | 99.00th=[ 126], 99.50th=[ 148], 99.90th=[ 192], 99.95th=[ 192], 00:27:45.769 | 99.99th=[ 192] 00:27:45.769 bw ( KiB/s): min= 560, max= 1667, per=4.82%, avg=956.00, stdev=219.40, samples=19 00:27:45.769 iops : min= 140, max= 416, avg=238.89, stdev=54.71, samples=19 00:27:45.769 lat (msec) : 4=0.66%, 10=1.99%, 20=0.66%, 50=22.14%, 100=64.78% 00:27:45.769 lat (msec) : 250=9.77% 00:27:45.769 cpu : usr=44.04%, sys=1.03%, ctx=1539, majf=0, minf=9 00:27:45.769 IO depths : 1=1.9%, 2=4.2%, 4=12.9%, 8=70.1%, 16=10.8%, 32=0.0%, >=64=0.0% 00:27:45.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 complete : 0=0.0%, 4=90.7%, 8=4.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 issued rwts: total=2416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.769 filename1: (groupid=0, jobs=1): err= 0: pid=100440: Mon May 13 18:40:59 2024 
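The JSON that fio receives on /dev/fd/62 for this run is built from the three bdev_nvme_attach_controller blocks printed in the trace. A sketch of the assembled file is below; the surrounding subsystems/bdev wrapper is an assumption based on SPDK's standard JSON config layout (gen_nvmf_target_json only echoes the per-controller entries), and bdev.json is just an illustrative file name:

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# The Nvme1/cnode1 and Nvme2/cnode2 entries printed in the trace go into the same "config" array.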
00:27:45.769 read: IOPS=194, BW=780KiB/s (799kB/s)(7820KiB/10028msec) 00:27:45.769 slat (usec): min=6, max=4029, avg=16.88, stdev=128.52 00:27:45.769 clat (msec): min=34, max=203, avg=81.93, stdev=28.01 00:27:45.769 lat (msec): min=34, max=203, avg=81.95, stdev=28.01 00:27:45.769 clat percentiles (msec): 00:27:45.769 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:27:45.769 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 85], 00:27:45.769 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 121], 95.00th=[ 136], 00:27:45.769 | 99.00th=[ 157], 99.50th=[ 161], 99.90th=[ 205], 99.95th=[ 205], 00:27:45.769 | 99.99th=[ 205] 00:27:45.769 bw ( KiB/s): min= 472, max= 1040, per=3.91%, avg=775.60, stdev=172.41, samples=20 00:27:45.769 iops : min= 118, max= 260, avg=193.90, stdev=43.10, samples=20 00:27:45.769 lat (msec) : 50=14.73%, 100=62.40%, 250=22.86% 00:27:45.769 cpu : usr=38.54%, sys=1.03%, ctx=1095, majf=0, minf=9 00:27:45.769 IO depths : 1=2.0%, 2=4.1%, 4=12.3%, 8=70.5%, 16=11.1%, 32=0.0%, >=64=0.0% 00:27:45.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 issued rwts: total=1955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.769 filename1: (groupid=0, jobs=1): err= 0: pid=100441: Mon May 13 18:40:59 2024 00:27:45.769 read: IOPS=200, BW=800KiB/s (820kB/s)(8028KiB/10031msec) 00:27:45.769 slat (usec): min=4, max=8030, avg=24.58, stdev=309.79 00:27:45.769 clat (msec): min=17, max=191, avg=79.75, stdev=26.26 00:27:45.769 lat (msec): min=17, max=191, avg=79.78, stdev=26.26 00:27:45.769 clat percentiles (msec): 00:27:45.769 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:27:45.769 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 84], 00:27:45.769 | 70.00th=[ 91], 80.00th=[ 96], 90.00th=[ 111], 95.00th=[ 132], 00:27:45.769 | 99.00th=[ 157], 99.50th=[ 192], 99.90th=[ 192], 99.95th=[ 192], 00:27:45.769 | 99.99th=[ 192] 00:27:45.769 bw ( KiB/s): min= 424, max= 1104, per=4.01%, avg=796.20, stdev=162.53, samples=20 00:27:45.769 iops : min= 106, max= 276, avg=199.00, stdev=40.63, samples=20 00:27:45.769 lat (msec) : 20=0.80%, 50=14.35%, 100=67.41%, 250=17.44% 00:27:45.769 cpu : usr=33.62%, sys=1.08%, ctx=975, majf=0, minf=9 00:27:45.769 IO depths : 1=1.6%, 2=3.4%, 4=12.2%, 8=71.0%, 16=11.9%, 32=0.0%, >=64=0.0% 00:27:45.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.769 issued rwts: total=2007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.769 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.769 filename1: (groupid=0, jobs=1): err= 0: pid=100442: Mon May 13 18:40:59 2024 00:27:45.770 read: IOPS=236, BW=945KiB/s (967kB/s)(9464KiB/10019msec) 00:27:45.770 slat (usec): min=6, max=3817, avg=13.30, stdev=78.47 00:27:45.770 clat (msec): min=30, max=129, avg=67.62, stdev=19.69 00:27:45.770 lat (msec): min=30, max=129, avg=67.63, stdev=19.69 00:27:45.770 clat percentiles (msec): 00:27:45.770 | 1.00th=[ 33], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 51], 00:27:45.770 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 65], 60.00th=[ 72], 00:27:45.770 | 70.00th=[ 78], 80.00th=[ 84], 90.00th=[ 93], 95.00th=[ 106], 00:27:45.770 | 99.00th=[ 123], 99.50th=[ 128], 99.90th=[ 130], 99.95th=[ 130], 00:27:45.770 | 99.99th=[ 130] 00:27:45.770 bw ( KiB/s): min= 640, max= 
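Given that config, the 24-thread run shown here could be reproduced standalone with the fio bdev plugin. This is a hedged sketch only: the job file is inferred from the fio banner (randread, 4096B blocks, iodepth 16, jobs filename0..filename2 with 8 jobs each), not copied from gen_fio_conf, and the Nvme0n1-style filenames assume SPDK's <controller-name>n<nsid> bdev naming:

cat > dif.fio <<'EOF'
[global]
# the spdk_bdev ioengine runs in thread mode, matching "Starting 24 threads" above
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=16
numjobs=8

[filename0]
# assumed bdev name for the cnode0 namespace attached as controller Nvme0
filename=Nvme0n1

[filename1]
# assumed bdev name for the cnode1 namespace attached as controller Nvme1
filename=Nvme1n1

[filename2]
# assumed bdev name for the cnode2 namespace attached as controller Nvme2
filename=Nvme2n1
EOF

LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio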
1248, per=4.75%, avg=942.45, stdev=158.74, samples=20 00:27:45.770 iops : min= 160, max= 312, avg=235.60, stdev=39.68, samples=20 00:27:45.770 lat (msec) : 50=20.58%, 100=72.23%, 250=7.19% 00:27:45.770 cpu : usr=43.09%, sys=1.14%, ctx=1453, majf=0, minf=9 00:27:45.770 IO depths : 1=0.7%, 2=1.8%, 4=8.3%, 8=76.4%, 16=12.8%, 32=0.0%, >=64=0.0% 00:27:45.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 issued rwts: total=2366,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.770 filename1: (groupid=0, jobs=1): err= 0: pid=100443: Mon May 13 18:40:59 2024 00:27:45.770 read: IOPS=208, BW=836KiB/s (856kB/s)(8396KiB/10047msec) 00:27:45.770 slat (usec): min=4, max=9057, avg=44.46, stdev=469.99 00:27:45.770 clat (msec): min=23, max=191, avg=76.30, stdev=26.19 00:27:45.770 lat (msec): min=23, max=191, avg=76.35, stdev=26.19 00:27:45.770 clat percentiles (msec): 00:27:45.770 | 1.00th=[ 31], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:27:45.770 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 78], 00:27:45.770 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 113], 95.00th=[ 124], 00:27:45.770 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 190], 99.95th=[ 192], 00:27:45.770 | 99.99th=[ 192] 00:27:45.770 bw ( KiB/s): min= 432, max= 1072, per=4.20%, avg=833.20, stdev=153.91, samples=20 00:27:45.770 iops : min= 108, max= 268, avg=208.30, stdev=38.48, samples=20 00:27:45.770 lat (msec) : 50=15.29%, 100=67.89%, 250=16.82% 00:27:45.770 cpu : usr=34.26%, sys=0.84%, ctx=986, majf=0, minf=9 00:27:45.770 IO depths : 1=1.3%, 2=3.1%, 4=11.8%, 8=71.8%, 16=11.9%, 32=0.0%, >=64=0.0% 00:27:45.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 complete : 0=0.0%, 4=90.3%, 8=4.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 issued rwts: total=2099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.770 filename1: (groupid=0, jobs=1): err= 0: pid=100444: Mon May 13 18:40:59 2024 00:27:45.770 read: IOPS=232, BW=930KiB/s (953kB/s)(9336KiB/10034msec) 00:27:45.770 slat (usec): min=3, max=8028, avg=14.35, stdev=166.02 00:27:45.770 clat (msec): min=21, max=147, avg=68.67, stdev=20.27 00:27:45.770 lat (msec): min=21, max=147, avg=68.68, stdev=20.26 00:27:45.770 clat percentiles (msec): 00:27:45.770 | 1.00th=[ 34], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 49], 00:27:45.770 | 30.00th=[ 57], 40.00th=[ 61], 50.00th=[ 72], 60.00th=[ 72], 00:27:45.770 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:27:45.770 | 99.00th=[ 128], 99.50th=[ 132], 99.90th=[ 148], 99.95th=[ 148], 00:27:45.770 | 99.99th=[ 148] 00:27:45.770 bw ( KiB/s): min= 640, max= 1200, per=4.68%, avg=929.05, stdev=153.93, samples=20 00:27:45.770 iops : min= 160, max= 300, avg=232.25, stdev=38.47, samples=20 00:27:45.770 lat (msec) : 50=24.25%, 100=69.24%, 250=6.51% 00:27:45.770 cpu : usr=33.33%, sys=0.81%, ctx=884, majf=0, minf=9 00:27:45.770 IO depths : 1=0.3%, 2=0.7%, 4=7.0%, 8=78.7%, 16=13.3%, 32=0.0%, >=64=0.0% 00:27:45.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 complete : 0=0.0%, 4=89.1%, 8=6.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 issued rwts: total=2334,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.770 filename1: (groupid=0, 
jobs=1): err= 0: pid=100445: Mon May 13 18:40:59 2024 00:27:45.770 read: IOPS=200, BW=802KiB/s (821kB/s)(8052KiB/10039msec) 00:27:45.770 slat (usec): min=6, max=8055, avg=32.08, stdev=368.02 00:27:45.770 clat (msec): min=19, max=185, avg=79.44, stdev=25.98 00:27:45.770 lat (msec): min=19, max=185, avg=79.48, stdev=25.98 00:27:45.770 clat percentiles (msec): 00:27:45.770 | 1.00th=[ 38], 5.00th=[ 45], 10.00th=[ 50], 20.00th=[ 59], 00:27:45.770 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 83], 00:27:45.770 | 70.00th=[ 91], 80.00th=[ 97], 90.00th=[ 116], 95.00th=[ 129], 00:27:45.770 | 99.00th=[ 146], 99.50th=[ 157], 99.90th=[ 186], 99.95th=[ 186], 00:27:45.770 | 99.99th=[ 186] 00:27:45.770 bw ( KiB/s): min= 464, max= 1200, per=4.02%, avg=798.55, stdev=170.81, samples=20 00:27:45.770 iops : min= 116, max= 300, avg=199.60, stdev=42.72, samples=20 00:27:45.770 lat (msec) : 20=0.79%, 50=10.13%, 100=71.14%, 250=17.93% 00:27:45.770 cpu : usr=34.34%, sys=0.72%, ctx=969, majf=0, minf=9 00:27:45.770 IO depths : 1=1.6%, 2=3.3%, 4=11.3%, 8=72.0%, 16=11.8%, 32=0.0%, >=64=0.0% 00:27:45.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 complete : 0=0.0%, 4=90.1%, 8=5.1%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 issued rwts: total=2013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.770 filename1: (groupid=0, jobs=1): err= 0: pid=100446: Mon May 13 18:40:59 2024 00:27:45.770 read: IOPS=200, BW=803KiB/s (822kB/s)(8052KiB/10028msec) 00:27:45.770 slat (usec): min=4, max=8040, avg=27.27, stdev=290.44 00:27:45.770 clat (msec): min=35, max=178, avg=79.50, stdev=25.99 00:27:45.770 lat (msec): min=35, max=178, avg=79.52, stdev=26.01 00:27:45.770 clat percentiles (msec): 00:27:45.770 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 59], 00:27:45.770 | 30.00th=[ 68], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 81], 00:27:45.770 | 70.00th=[ 86], 80.00th=[ 96], 90.00th=[ 120], 95.00th=[ 131], 00:27:45.770 | 99.00th=[ 165], 99.50th=[ 165], 99.90th=[ 180], 99.95th=[ 180], 00:27:45.770 | 99.99th=[ 180] 00:27:45.770 bw ( KiB/s): min= 512, max= 1120, per=4.04%, avg=801.20, stdev=150.44, samples=20 00:27:45.770 iops : min= 128, max= 280, avg=200.30, stdev=37.61, samples=20 00:27:45.770 lat (msec) : 50=10.93%, 100=72.13%, 250=16.94% 00:27:45.770 cpu : usr=32.29%, sys=0.74%, ctx=905, majf=0, minf=9 00:27:45.770 IO depths : 1=1.5%, 2=3.5%, 4=12.3%, 8=71.1%, 16=11.5%, 32=0.0%, >=64=0.0% 00:27:45.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 complete : 0=0.0%, 4=90.3%, 8=4.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 issued rwts: total=2013,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.770 filename2: (groupid=0, jobs=1): err= 0: pid=100447: Mon May 13 18:40:59 2024 00:27:45.770 read: IOPS=221, BW=887KiB/s (909kB/s)(8908KiB/10039msec) 00:27:45.770 slat (usec): min=5, max=8024, avg=14.74, stdev=169.86 00:27:45.770 clat (msec): min=5, max=132, avg=72.04, stdev=21.45 00:27:45.770 lat (msec): min=5, max=132, avg=72.06, stdev=21.45 00:27:45.770 clat percentiles (msec): 00:27:45.770 | 1.00th=[ 8], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 52], 00:27:45.770 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:27:45.770 | 70.00th=[ 84], 80.00th=[ 88], 90.00th=[ 96], 95.00th=[ 108], 00:27:45.770 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 133], 00:27:45.770 | 
99.99th=[ 133] 00:27:45.770 bw ( KiB/s): min= 688, max= 1280, per=4.46%, avg=884.40, stdev=157.87, samples=20 00:27:45.770 iops : min= 172, max= 320, avg=221.10, stdev=39.47, samples=20 00:27:45.770 lat (msec) : 10=1.44%, 20=0.72%, 50=16.57%, 100=73.55%, 250=7.72% 00:27:45.770 cpu : usr=34.13%, sys=0.98%, ctx=908, majf=0, minf=9 00:27:45.770 IO depths : 1=1.1%, 2=2.4%, 4=9.6%, 8=74.9%, 16=12.0%, 32=0.0%, >=64=0.0% 00:27:45.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 complete : 0=0.0%, 4=89.8%, 8=5.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 issued rwts: total=2227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.770 filename2: (groupid=0, jobs=1): err= 0: pid=100448: Mon May 13 18:40:59 2024 00:27:45.770 read: IOPS=181, BW=724KiB/s (742kB/s)(7256KiB/10016msec) 00:27:45.770 slat (usec): min=4, max=8048, avg=20.76, stdev=266.39 00:27:45.770 clat (msec): min=28, max=191, avg=88.19, stdev=27.55 00:27:45.770 lat (msec): min=28, max=191, avg=88.21, stdev=27.55 00:27:45.770 clat percentiles (msec): 00:27:45.770 | 1.00th=[ 47], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 70], 00:27:45.770 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 82], 60.00th=[ 94], 00:27:45.770 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 130], 95.00th=[ 142], 00:27:45.770 | 99.00th=[ 167], 99.50th=[ 169], 99.90th=[ 192], 99.95th=[ 192], 00:27:45.770 | 99.99th=[ 192] 00:27:45.770 bw ( KiB/s): min= 384, max= 1024, per=3.62%, avg=719.20, stdev=148.33, samples=20 00:27:45.770 iops : min= 96, max= 256, avg=179.80, stdev=37.08, samples=20 00:27:45.770 lat (msec) : 50=7.28%, 100=66.76%, 250=25.96% 00:27:45.770 cpu : usr=32.28%, sys=0.87%, ctx=882, majf=0, minf=9 00:27:45.770 IO depths : 1=2.4%, 2=5.3%, 4=15.0%, 8=66.5%, 16=10.7%, 32=0.0%, >=64=0.0% 00:27:45.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 complete : 0=0.0%, 4=91.2%, 8=3.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 issued rwts: total=1814,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.770 filename2: (groupid=0, jobs=1): err= 0: pid=100449: Mon May 13 18:40:59 2024 00:27:45.770 read: IOPS=215, BW=862KiB/s (882kB/s)(8636KiB/10023msec) 00:27:45.770 slat (nsec): min=5120, max=70861, avg=11209.44, stdev=5122.96 00:27:45.770 clat (msec): min=34, max=167, avg=74.20, stdev=23.12 00:27:45.770 lat (msec): min=34, max=167, avg=74.21, stdev=23.12 00:27:45.770 clat percentiles (msec): 00:27:45.770 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 54], 00:27:45.770 | 30.00th=[ 60], 40.00th=[ 67], 50.00th=[ 72], 60.00th=[ 75], 00:27:45.770 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 107], 95.00th=[ 123], 00:27:45.770 | 99.00th=[ 133], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:27:45.770 | 99.99th=[ 169] 00:27:45.770 bw ( KiB/s): min= 560, max= 1144, per=4.32%, avg=857.20, stdev=145.68, samples=20 00:27:45.770 iops : min= 140, max= 286, avg=214.30, stdev=36.42, samples=20 00:27:45.770 lat (msec) : 50=15.89%, 100=71.01%, 250=13.11% 00:27:45.770 cpu : usr=43.25%, sys=1.08%, ctx=1254, majf=0, minf=9 00:27:45.770 IO depths : 1=1.9%, 2=4.2%, 4=13.0%, 8=70.0%, 16=10.9%, 32=0.0%, >=64=0.0% 00:27:45.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 complete : 0=0.0%, 4=90.6%, 8=4.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 issued rwts: total=2159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.770 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:27:45.770 filename2: (groupid=0, jobs=1): err= 0: pid=100450: Mon May 13 18:40:59 2024 00:27:45.770 read: IOPS=186, BW=746KiB/s (764kB/s)(7460KiB/10005msec) 00:27:45.770 slat (usec): min=3, max=8031, avg=26.91, stdev=334.35 00:27:45.770 clat (msec): min=26, max=167, avg=85.65, stdev=24.98 00:27:45.770 lat (msec): min=26, max=167, avg=85.68, stdev=24.97 00:27:45.770 clat percentiles (msec): 00:27:45.770 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 60], 20.00th=[ 70], 00:27:45.770 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 85], 00:27:45.770 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 132], 00:27:45.770 | 99.00th=[ 165], 99.50th=[ 165], 99.90th=[ 169], 99.95th=[ 169], 00:27:45.770 | 99.99th=[ 169] 00:27:45.770 bw ( KiB/s): min= 464, max= 976, per=3.71%, avg=737.95, stdev=148.92, samples=19 00:27:45.770 iops : min= 116, max= 244, avg=184.47, stdev=37.24, samples=19 00:27:45.770 lat (msec) : 50=5.84%, 100=70.99%, 250=23.16% 00:27:45.770 cpu : usr=33.72%, sys=0.90%, ctx=912, majf=0, minf=9 00:27:45.770 IO depths : 1=2.2%, 2=5.4%, 4=15.2%, 8=66.5%, 16=10.7%, 32=0.0%, >=64=0.0% 00:27:45.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 complete : 0=0.0%, 4=91.6%, 8=3.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 issued rwts: total=1865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.770 filename2: (groupid=0, jobs=1): err= 0: pid=100451: Mon May 13 18:40:59 2024 00:27:45.770 read: IOPS=215, BW=864KiB/s (885kB/s)(8660KiB/10024msec) 00:27:45.770 slat (usec): min=4, max=4054, avg=17.95, stdev=153.94 00:27:45.770 clat (msec): min=28, max=208, avg=73.86, stdev=29.20 00:27:45.770 lat (msec): min=28, max=208, avg=73.88, stdev=29.20 00:27:45.770 clat percentiles (msec): 00:27:45.770 | 1.00th=[ 34], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 50], 00:27:45.770 | 30.00th=[ 56], 40.00th=[ 62], 50.00th=[ 70], 60.00th=[ 74], 00:27:45.770 | 70.00th=[ 83], 80.00th=[ 92], 90.00th=[ 111], 95.00th=[ 124], 00:27:45.770 | 99.00th=[ 192], 99.50th=[ 209], 99.90th=[ 209], 99.95th=[ 209], 00:27:45.770 | 99.99th=[ 209] 00:27:45.770 bw ( KiB/s): min= 344, max= 1248, per=4.33%, avg=859.65, stdev=215.42, samples=20 00:27:45.770 iops : min= 86, max= 312, avg=214.90, stdev=53.85, samples=20 00:27:45.770 lat (msec) : 50=21.34%, 100=63.37%, 250=15.29% 00:27:45.770 cpu : usr=39.97%, sys=1.03%, ctx=1449, majf=0, minf=9 00:27:45.770 IO depths : 1=0.9%, 2=2.2%, 4=9.1%, 8=74.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:27:45.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 complete : 0=0.0%, 4=89.9%, 8=5.9%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.770 issued rwts: total=2165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.770 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.770 filename2: (groupid=0, jobs=1): err= 0: pid=100452: Mon May 13 18:40:59 2024 00:27:45.770 read: IOPS=203, BW=814KiB/s (834kB/s)(8152KiB/10009msec) 00:27:45.770 slat (usec): min=5, max=4035, avg=13.72, stdev=89.30 00:27:45.770 clat (msec): min=35, max=183, avg=78.46, stdev=26.39 00:27:45.770 lat (msec): min=35, max=183, avg=78.48, stdev=26.39 00:27:45.770 clat percentiles (msec): 00:27:45.770 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 55], 00:27:45.770 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 80], 00:27:45.770 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 118], 95.00th=[ 132], 00:27:45.770 | 99.00th=[ 
150], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 184], 00:27:45.770 | 99.99th=[ 184] 00:27:45.770 bw ( KiB/s): min= 512, max= 1152, per=4.06%, avg=805.89, stdev=164.03, samples=19 00:27:45.771 iops : min= 128, max= 288, avg=201.47, stdev=41.01, samples=19 00:27:45.771 lat (msec) : 50=16.24%, 100=65.51%, 250=18.25% 00:27:45.771 cpu : usr=37.91%, sys=0.94%, ctx=1028, majf=0, minf=9 00:27:45.771 IO depths : 1=2.2%, 2=4.8%, 4=12.8%, 8=69.0%, 16=11.1%, 32=0.0%, >=64=0.0% 00:27:45.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.771 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.771 issued rwts: total=2038,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.771 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.771 filename2: (groupid=0, jobs=1): err= 0: pid=100453: Mon May 13 18:40:59 2024 00:27:45.771 read: IOPS=194, BW=779KiB/s (798kB/s)(7804KiB/10016msec) 00:27:45.771 slat (usec): min=4, max=8028, avg=26.21, stdev=326.90 00:27:45.771 clat (msec): min=17, max=182, avg=81.95, stdev=24.61 00:27:45.771 lat (msec): min=17, max=182, avg=81.98, stdev=24.63 00:27:45.771 clat percentiles (msec): 00:27:45.771 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 64], 00:27:45.771 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 84], 00:27:45.771 | 70.00th=[ 90], 80.00th=[ 100], 90.00th=[ 116], 95.00th=[ 123], 00:27:45.771 | 99.00th=[ 155], 99.50th=[ 167], 99.90th=[ 184], 99.95th=[ 184], 00:27:45.771 | 99.99th=[ 184] 00:27:45.771 bw ( KiB/s): min= 464, max= 976, per=3.92%, avg=777.60, stdev=133.86, samples=20 00:27:45.771 iops : min= 116, max= 244, avg=194.40, stdev=33.46, samples=20 00:27:45.771 lat (msec) : 20=0.36%, 50=8.00%, 100=72.83%, 250=18.81% 00:27:45.771 cpu : usr=41.08%, sys=0.86%, ctx=1249, majf=0, minf=9 00:27:45.771 IO depths : 1=1.7%, 2=3.6%, 4=11.1%, 8=71.5%, 16=12.1%, 32=0.0%, >=64=0.0% 00:27:45.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.771 complete : 0=0.0%, 4=90.4%, 8=5.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.771 issued rwts: total=1951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.771 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.771 filename2: (groupid=0, jobs=1): err= 0: pid=100454: Mon May 13 18:40:59 2024 00:27:45.771 read: IOPS=180, BW=724KiB/s (741kB/s)(7248KiB/10014msec) 00:27:45.771 slat (nsec): min=4755, max=62478, avg=11503.34, stdev=5787.21 00:27:45.771 clat (msec): min=24, max=203, avg=88.30, stdev=28.35 00:27:45.771 lat (msec): min=24, max=203, avg=88.31, stdev=28.35 00:27:45.771 clat percentiles (msec): 00:27:45.771 | 1.00th=[ 37], 5.00th=[ 50], 10.00th=[ 60], 20.00th=[ 71], 00:27:45.771 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 88], 00:27:45.771 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 131], 95.00th=[ 157], 00:27:45.771 | 99.00th=[ 165], 99.50th=[ 165], 99.90th=[ 167], 99.95th=[ 205], 00:27:45.771 | 99.99th=[ 205] 00:27:45.771 bw ( KiB/s): min= 432, max= 976, per=3.63%, avg=720.20, stdev=161.38, samples=20 00:27:45.771 iops : min= 108, max= 244, avg=180.05, stdev=40.34, samples=20 00:27:45.771 lat (msec) : 50=5.41%, 100=70.36%, 250=24.23% 00:27:45.771 cpu : usr=32.31%, sys=0.83%, ctx=884, majf=0, minf=9 00:27:45.771 IO depths : 1=2.3%, 2=5.2%, 4=14.9%, 8=67.1%, 16=10.5%, 32=0.0%, >=64=0.0% 00:27:45.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.771 complete : 0=0.0%, 4=91.2%, 8=3.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:45.771 issued rwts: 
total=1812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:45.771 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:45.771 00:27:45.771 Run status group 0 (all jobs): 00:27:45.771 READ: bw=19.4MiB/s (20.3MB/s), 704KiB/s-1022KiB/s (721kB/s-1047kB/s), io=195MiB (204MB), run=10004-10047msec 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete 
bdev_null2 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.771 bdev_null0 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.771 [2024-05-13 18:41:00.118432] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 
-- # local sub_id=1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.771 bdev_null1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.771 18:41:00 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.771 { 00:27:45.771 "params": { 00:27:45.771 "name": "Nvme$subsystem", 00:27:45.771 "trtype": "$TEST_TRANSPORT", 00:27:45.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.771 "adrfam": "ipv4", 00:27:45.771 "trsvcid": "$NVMF_PORT", 00:27:45.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.771 "hdgst": ${hdgst:-false}, 00:27:45.771 "ddgst": ${ddgst:-false} 00:27:45.771 }, 00:27:45.771 "method": "bdev_nvme_attach_controller" 00:27:45.771 } 00:27:45.771 EOF 00:27:45.771 )") 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:45.771 { 00:27:45.771 "params": { 00:27:45.771 "name": "Nvme$subsystem", 00:27:45.771 "trtype": "$TEST_TRANSPORT", 00:27:45.771 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.771 "adrfam": "ipv4", 00:27:45.771 "trsvcid": "$NVMF_PORT", 00:27:45.771 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.771 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.771 "hdgst": ${hdgst:-false}, 00:27:45.771 "ddgst": ${ddgst:-false} 00:27:45.771 }, 00:27:45.771 "method": "bdev_nvme_attach_controller" 00:27:45.771 } 00:27:45.771 EOF 00:27:45.771 )") 00:27:45.771 18:41:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:27:45.772 18:41:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
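The create_subsystems trace above boils down to a handful of RPCs per subsystem. Outside the harness the same target-side setup can be reproduced with scripts/rpc.py against an already running nvmf_tgt; the sketch below is only illustrative and assumes the default RPC socket plus a TCP transport created beforehand (nvmf_create_transport -t tcp, which the test does earlier). The arguments themselves are copied from the trace.

# 64 MB null bdevs, 512-byte blocks, 16 bytes of metadata, DIF type 1 (as in the trace)
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
# Expose each bdev as its own NVMe-oF subsystem listening on 10.0.0.2:4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

rpc_cmd in the trace is essentially a wrapper that forwards these same arguments to rpc.py.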
00:27:45.772 18:41:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:27:45.772 18:41:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:45.772 "params": { 00:27:45.772 "name": "Nvme0", 00:27:45.772 "trtype": "tcp", 00:27:45.772 "traddr": "10.0.0.2", 00:27:45.772 "adrfam": "ipv4", 00:27:45.772 "trsvcid": "4420", 00:27:45.772 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:45.772 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:45.772 "hdgst": false, 00:27:45.772 "ddgst": false 00:27:45.772 }, 00:27:45.772 "method": "bdev_nvme_attach_controller" 00:27:45.772 },{ 00:27:45.772 "params": { 00:27:45.772 "name": "Nvme1", 00:27:45.772 "trtype": "tcp", 00:27:45.772 "traddr": "10.0.0.2", 00:27:45.772 "adrfam": "ipv4", 00:27:45.772 "trsvcid": "4420", 00:27:45.772 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:45.772 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:45.772 "hdgst": false, 00:27:45.772 "ddgst": false 00:27:45.772 }, 00:27:45.772 "method": "bdev_nvme_attach_controller" 00:27:45.772 }' 00:27:45.772 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:45.772 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:45.772 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:45.772 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:45.772 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:45.772 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:45.772 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:45.772 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:45.772 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:45.772 18:41:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:45.772 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:45.772 ... 00:27:45.772 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:45.772 ... 
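The JSON printed just above is what fio's SPDK bdev plugin consumes: each "params" block becomes a bdev_nvme_attach_controller call at startup, and the resulting bdevs are what the jobs address. The harness feeds both the JSON and the job file through /dev/fd; below is a standalone sketch of an equivalent invocation using ordinary files. The /tmp paths are placeholders, the job options are reconstructed from the job summary lines above (bs=8k,16k,128k, iodepth=8, numjobs=2, 5-second runtime), only one of the two controllers is shown, and Nvme0n1 relies on SPDK's usual <controller>n<nsid> bdev naming.

# JSON config: the printed attach-controller entry wrapped in the standard SPDK subsystem/config layout
cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# Approximate job file for this pass of the test (the real one is generated by gen_fio_conf)
cat > /tmp/dif.fio <<'FIO'
[global]
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5

[filename0]
filename=Nvme0n1
FIO

# Preload the plugin and run fio exactly as the trace does
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json /tmp/dif.fio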
00:27:45.772 fio-3.35 00:27:45.772 Starting 4 threads 00:27:51.040 00:27:51.040 filename0: (groupid=0, jobs=1): err= 0: pid=100575: Mon May 13 18:41:06 2024 00:27:51.040 read: IOPS=1960, BW=15.3MiB/s (16.1MB/s)(76.6MiB/5003msec) 00:27:51.040 slat (nsec): min=7382, max=97408, avg=9644.08, stdev=3844.26 00:27:51.040 clat (usec): min=1303, max=4828, avg=4031.63, stdev=156.70 00:27:51.040 lat (usec): min=1321, max=4844, avg=4041.28, stdev=156.19 00:27:51.040 clat percentiles (usec): 00:27:51.040 | 1.00th=[ 3621], 5.00th=[ 3949], 10.00th=[ 3982], 20.00th=[ 3982], 00:27:51.040 | 30.00th=[ 4015], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4047], 00:27:51.040 | 70.00th=[ 4047], 80.00th=[ 4080], 90.00th=[ 4113], 95.00th=[ 4178], 00:27:51.040 | 99.00th=[ 4293], 99.50th=[ 4490], 99.90th=[ 4752], 99.95th=[ 4817], 00:27:51.040 | 99.99th=[ 4817] 00:27:51.040 bw ( KiB/s): min=15488, max=15968, per=25.12%, avg=15726.22, stdev=156.04, samples=9 00:27:51.040 iops : min= 1936, max= 1996, avg=1965.78, stdev=19.50, samples=9 00:27:51.040 lat (msec) : 2=0.16%, 4=23.79%, 10=76.05% 00:27:51.040 cpu : usr=93.28%, sys=5.50%, ctx=7, majf=0, minf=0 00:27:51.040 IO depths : 1=11.1%, 2=25.0%, 4=50.0%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.040 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.040 issued rwts: total=9808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.040 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:51.040 filename0: (groupid=0, jobs=1): err= 0: pid=100576: Mon May 13 18:41:06 2024 00:27:51.040 read: IOPS=1958, BW=15.3MiB/s (16.0MB/s)(76.5MiB/5001msec) 00:27:51.040 slat (nsec): min=7533, max=57201, avg=14264.55, stdev=5233.69 00:27:51.040 clat (usec): min=1107, max=6153, avg=4018.78, stdev=156.17 00:27:51.040 lat (usec): min=1115, max=6179, avg=4033.05, stdev=155.76 00:27:51.040 clat percentiles (usec): 00:27:51.040 | 1.00th=[ 3884], 5.00th=[ 3916], 10.00th=[ 3949], 20.00th=[ 3949], 00:27:51.040 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4015], 60.00th=[ 4015], 00:27:51.040 | 70.00th=[ 4047], 80.00th=[ 4080], 90.00th=[ 4113], 95.00th=[ 4146], 00:27:51.040 | 99.00th=[ 4293], 99.50th=[ 4490], 99.90th=[ 5080], 99.95th=[ 6128], 00:27:51.040 | 99.99th=[ 6128] 00:27:51.040 bw ( KiB/s): min=15488, max=15872, per=25.06%, avg=15687.11, stdev=129.77, samples=9 00:27:51.040 iops : min= 1936, max= 1984, avg=1960.89, stdev=16.22, samples=9 00:27:51.040 lat (msec) : 2=0.08%, 4=41.13%, 10=58.79% 00:27:51.040 cpu : usr=93.76%, sys=5.14%, ctx=21, majf=0, minf=9 00:27:51.040 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.040 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.040 issued rwts: total=9792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.040 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:51.040 filename1: (groupid=0, jobs=1): err= 0: pid=100577: Mon May 13 18:41:06 2024 00:27:51.040 read: IOPS=1956, BW=15.3MiB/s (16.0MB/s)(76.4MiB/5002msec) 00:27:51.040 slat (nsec): min=4727, max=54120, avg=11125.58, stdev=4377.88 00:27:51.040 clat (usec): min=2057, max=7171, avg=4045.18, stdev=161.89 00:27:51.040 lat (usec): min=2065, max=7185, avg=4056.31, stdev=161.68 00:27:51.040 clat percentiles (usec): 00:27:51.040 | 1.00th=[ 3490], 5.00th=[ 3949], 10.00th=[ 3982], 20.00th=[ 3982], 00:27:51.040 | 30.00th=[ 
4015], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4047], 00:27:51.040 | 70.00th=[ 4080], 80.00th=[ 4080], 90.00th=[ 4146], 95.00th=[ 4178], 00:27:51.040 | 99.00th=[ 4555], 99.50th=[ 4686], 99.90th=[ 5014], 99.95th=[ 7111], 00:27:51.040 | 99.99th=[ 7177] 00:27:51.040 bw ( KiB/s): min=15440, max=15824, per=25.06%, avg=15690.56, stdev=126.53, samples=9 00:27:51.040 iops : min= 1930, max= 1978, avg=1961.22, stdev=15.86, samples=9 00:27:51.040 lat (msec) : 4=24.88%, 10=75.12% 00:27:51.040 cpu : usr=94.32%, sys=4.44%, ctx=20, majf=0, minf=0 00:27:51.040 IO depths : 1=6.3%, 2=13.1%, 4=61.9%, 8=18.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.040 complete : 0=0.0%, 4=89.5%, 8=10.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.040 issued rwts: total=9784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.040 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:51.040 filename1: (groupid=0, jobs=1): err= 0: pid=100578: Mon May 13 18:41:06 2024 00:27:51.040 read: IOPS=1953, BW=15.3MiB/s (16.0MB/s)(76.3MiB/5001msec) 00:27:51.040 slat (nsec): min=4922, max=75751, avg=14629.33, stdev=6081.12 00:27:51.040 clat (usec): min=1891, max=10251, avg=4018.82, stdev=231.98 00:27:51.040 lat (usec): min=1905, max=10265, avg=4033.45, stdev=232.21 00:27:51.040 clat percentiles (usec): 00:27:51.041 | 1.00th=[ 3884], 5.00th=[ 3916], 10.00th=[ 3949], 20.00th=[ 3949], 00:27:51.041 | 30.00th=[ 3982], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4015], 00:27:51.041 | 70.00th=[ 4047], 80.00th=[ 4047], 90.00th=[ 4080], 95.00th=[ 4146], 00:27:51.041 | 99.00th=[ 4293], 99.50th=[ 4490], 99.90th=[ 6980], 99.95th=[10159], 00:27:51.041 | 99.99th=[10290] 00:27:51.041 bw ( KiB/s): min=15488, max=15872, per=25.01%, avg=15655.11, stdev=144.69, samples=9 00:27:51.041 iops : min= 1936, max= 1984, avg=1956.89, stdev=18.09, samples=9 00:27:51.041 lat (msec) : 2=0.01%, 4=46.41%, 10=53.50%, 20=0.08% 00:27:51.041 cpu : usr=94.16%, sys=4.44%, ctx=14, majf=0, minf=9 00:27:51.041 IO depths : 1=11.5%, 2=25.0%, 4=50.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.041 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.041 issued rwts: total=9768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.041 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:51.041 00:27:51.041 Run status group 0 (all jobs): 00:27:51.041 READ: bw=61.1MiB/s (64.1MB/s), 15.3MiB/s-15.3MiB/s (16.0MB/s-16.1MB/s), io=306MiB (321MB), run=5001-5003msec 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.041 18:41:06 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.041 ************************************ 00:27:51.041 END TEST fio_dif_rand_params 00:27:51.041 ************************************ 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.041 00:27:51.041 real 0m23.840s 00:27:51.041 user 2m6.292s 00:27:51.041 sys 0m5.039s 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable 00:27:51.041 18:41:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:51.041 18:41:06 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:51.041 18:41:06 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:27:51.041 18:41:06 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:27:51.041 18:41:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:51.041 ************************************ 00:27:51.041 START TEST fio_dif_digest 00:27:51.041 ************************************ 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:51.041 bdev_null0 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:51.041 [2024-05-13 18:41:06.404417] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:51.041 18:41:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:51.041 { 00:27:51.041 "params": { 00:27:51.041 "name": "Nvme$subsystem", 00:27:51.041 "trtype": "$TEST_TRANSPORT", 00:27:51.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.041 "adrfam": "ipv4", 00:27:51.041 "trsvcid": "$NVMF_PORT", 00:27:51.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.041 "hdgst": ${hdgst:-false}, 00:27:51.041 "ddgst": ${ddgst:-false} 00:27:51.041 }, 00:27:51.041 "method": "bdev_nvme_attach_controller" 00:27:51.041 } 00:27:51.041 EOF 00:27:51.041 
)") 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:51.042 "params": { 00:27:51.042 "name": "Nvme0", 00:27:51.042 "trtype": "tcp", 00:27:51.042 "traddr": "10.0.0.2", 00:27:51.042 "adrfam": "ipv4", 00:27:51.042 "trsvcid": "4420", 00:27:51.042 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:51.042 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:51.042 "hdgst": true, 00:27:51.042 "ddgst": true 00:27:51.042 }, 00:27:51.042 "method": "bdev_nvme_attach_controller" 00:27:51.042 }' 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:27:51.042 18:41:06 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:51.042 18:41:06 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:51.042 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:51.042 ... 00:27:51.042 fio-3.35 00:27:51.042 Starting 3 threads 00:28:03.323 00:28:03.323 filename0: (groupid=0, jobs=1): err= 0: pid=100684: Mon May 13 18:41:17 2024 00:28:03.323 read: IOPS=207, BW=26.0MiB/s (27.3MB/s)(260MiB/10004msec) 00:28:03.323 slat (nsec): min=7550, max=77484, avg=20031.88, stdev=7879.67 00:28:03.323 clat (usec): min=7398, max=55296, avg=14401.83, stdev=2960.30 00:28:03.323 lat (usec): min=7420, max=55307, avg=14421.86, stdev=2960.91 00:28:03.323 clat percentiles (usec): 00:28:03.323 | 1.00th=[ 8356], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[12256], 00:28:03.323 | 30.00th=[14222], 40.00th=[14746], 50.00th=[15139], 60.00th=[15533], 00:28:03.323 | 70.00th=[15926], 80.00th=[16319], 90.00th=[16909], 95.00th=[17171], 00:28:03.323 | 99.00th=[18220], 99.50th=[18482], 99.90th=[50070], 99.95th=[54264], 00:28:03.323 | 99.99th=[55313] 00:28:03.323 bw ( KiB/s): min=23808, max=30208, per=34.38%, avg=26567.37, stdev=1650.63, samples=19 00:28:03.323 iops : min= 186, max= 236, avg=207.53, stdev=12.91, samples=19 00:28:03.323 lat (msec) : 10=13.85%, 20=86.01%, 100=0.14% 00:28:03.323 cpu : usr=93.22%, sys=5.03%, ctx=13, majf=0, minf=0 00:28:03.323 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.323 issued rwts: total=2080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.323 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:03.323 filename0: (groupid=0, jobs=1): err= 0: pid=100685: Mon May 13 18:41:17 2024 00:28:03.323 read: IOPS=186, BW=23.3MiB/s (24.5MB/s)(234MiB/10004msec) 00:28:03.323 slat (nsec): min=7750, max=87390, avg=17963.49, stdev=8889.69 00:28:03.323 clat (usec): min=4768, max=21256, avg=16038.62, stdev=2926.09 00:28:03.323 lat (usec): min=4780, max=21313, avg=16056.59, stdev=2926.87 00:28:03.323 clat percentiles (usec): 00:28:03.323 | 1.00th=[ 8586], 5.00th=[10290], 10.00th=[10683], 20.00th=[13304], 00:28:03.323 | 30.00th=[16319], 40.00th=[16712], 50.00th=[17171], 60.00th=[17433], 00:28:03.323 | 70.00th=[17695], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:28:03.323 | 99.00th=[20055], 99.50th=[20579], 99.90th=[21103], 99.95th=[21365], 00:28:03.323 | 99.99th=[21365] 00:28:03.323 bw ( KiB/s): min=20736, max=27392, per=30.91%, avg=23889.00, stdev=1804.55, samples=19 00:28:03.323 iops : min= 162, max= 214, avg=186.58, stdev=14.11, samples=19 00:28:03.323 lat (msec) : 10=2.78%, 20=96.31%, 50=0.91% 00:28:03.323 cpu : usr=94.89%, sys=3.67%, ctx=13, majf=0, minf=0 00:28:03.323 IO depths : 1=8.6%, 2=91.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.323 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.323 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:03.323 filename0: (groupid=0, jobs=1): err= 0: pid=100686: Mon May 13 18:41:17 2024 00:28:03.323 read: IOPS=209, BW=26.1MiB/s (27.4MB/s)(262MiB/10005msec) 00:28:03.323 slat (nsec): min=5169, max=79421, avg=17255.20, 
stdev=8169.39 00:28:03.323 clat (usec): min=8522, max=56838, avg=14324.92, stdev=8688.06 00:28:03.323 lat (usec): min=8535, max=56859, avg=14342.17, stdev=8687.98 00:28:03.323 clat percentiles (usec): 00:28:03.323 | 1.00th=[10290], 5.00th=[10945], 10.00th=[11207], 20.00th=[11731], 00:28:03.323 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:28:03.323 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13829], 95.00th=[16188], 00:28:03.323 | 99.00th=[54264], 99.50th=[54789], 99.90th=[55837], 99.95th=[56361], 00:28:03.323 | 99.99th=[56886] 00:28:03.323 bw ( KiB/s): min=20224, max=31744, per=34.72%, avg=26827.47, stdev=3139.54, samples=19 00:28:03.323 iops : min= 158, max= 248, avg=209.53, stdev=24.49, samples=19 00:28:03.323 lat (msec) : 10=0.29%, 20=94.98%, 50=0.10%, 100=4.64% 00:28:03.323 cpu : usr=94.41%, sys=4.19%, ctx=17, majf=0, minf=0 00:28:03.323 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.323 issued rwts: total=2092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.323 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:03.323 00:28:03.323 Run status group 0 (all jobs): 00:28:03.323 READ: bw=75.5MiB/s (79.1MB/s), 23.3MiB/s-26.1MiB/s (24.5MB/s-27.4MB/s), io=755MiB (792MB), run=10004-10005msec 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:03.323 ************************************ 00:28:03.323 END TEST fio_dif_digest 00:28:03.323 ************************************ 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.323 00:28:03.323 real 0m11.050s 00:28:03.323 user 0m28.941s 00:28:03.323 sys 0m1.585s 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:03.323 18:41:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:03.323 18:41:17 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:03.323 18:41:17 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:03.323 18:41:17 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:03.323 18:41:17 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:28:03.323 18:41:17 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:03.323 18:41:17 nvmf_dif -- nvmf/common.sh@120 -- # set +e 
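Before the teardown continues, it is worth isolating what distinguished the fio_dif_digest pass that just finished from the earlier random-params passes. Two knobs change, both visible in the trace: the null bdev is created with DIF type 3 protection information, and the generated attach-controller JSON sets "hdgst": true and "ddgst": true, so the NVMe/TCP connection negotiates CRC32C header and data digests. A condensed sketch of just those two changes, under the same assumptions as the earlier sketches:

# target side: protection information type 3 instead of type 1
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# initiator side (fio JSON): enable TCP header/data digests on the attached controller
#   "method": "bdev_nvme_attach_controller",
#   "params": { ..., "hdgst": true, "ddgst": true }

The job shape also changes (bs=128k, iodepth=3, numjobs=3, 10-second runtime), which is why this run reports three threads instead of four.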
00:28:03.323 18:41:17 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:03.323 18:41:17 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:03.323 rmmod nvme_tcp 00:28:03.323 rmmod nvme_fabrics 00:28:03.323 rmmod nvme_keyring 00:28:03.323 18:41:17 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:03.323 18:41:17 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:28:03.323 18:41:17 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:28:03.323 18:41:17 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 99938 ']' 00:28:03.323 18:41:17 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 99938 00:28:03.323 18:41:17 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 99938 ']' 00:28:03.323 18:41:17 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 99938 00:28:03.323 18:41:17 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:28:03.323 18:41:17 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:03.323 18:41:17 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 99938 00:28:03.323 killing process with pid 99938 00:28:03.323 18:41:17 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:03.323 18:41:17 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:03.323 18:41:17 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 99938' 00:28:03.323 18:41:17 nvmf_dif -- common/autotest_common.sh@965 -- # kill 99938 00:28:03.323 [2024-05-13 18:41:17.589147] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:03.323 18:41:17 nvmf_dif -- common/autotest_common.sh@970 -- # wait 99938 00:28:03.323 18:41:17 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:03.323 18:41:17 nvmf_dif -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:03.323 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:03.323 Waiting for block devices as requested 00:28:03.323 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:03.323 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:03.323 18:41:18 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:03.323 18:41:18 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:03.323 18:41:18 nvmf_dif -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:03.323 18:41:18 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:03.323 18:41:18 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.323 18:41:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:03.323 18:41:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.323 18:41:18 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:03.323 00:28:03.323 real 1m0.397s 00:28:03.323 user 3m52.292s 00:28:03.323 sys 0m14.630s 00:28:03.323 18:41:18 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:03.323 ************************************ 00:28:03.323 END TEST nvmf_dif 00:28:03.323 ************************************ 00:28:03.323 18:41:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:03.323 18:41:18 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:03.323 18:41:18 -- 
common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:03.323 18:41:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:03.323 18:41:18 -- common/autotest_common.sh@10 -- # set +x 00:28:03.323 ************************************ 00:28:03.323 START TEST nvmf_abort_qd_sizes 00:28:03.323 ************************************ 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:03.323 * Looking for test storage... 00:28:03.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ virt != virt ]] 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ no == yes ]] 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@423 -- # [[ virt == phy ]] 00:28:03.323 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@426 -- # [[ virt == phy-fallback ]] 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@431 -- # [[ tcp == tcp ]] 00:28:03.324 18:41:18 
nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # nvmf_veth_init 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:03.324 Cannot find device "nvmf_tgt_br" 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # true 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:03.324 Cannot find device "nvmf_tgt_br2" 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # true 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:03.324 Cannot find device "nvmf_tgt_br" 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # true 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:03.324 Cannot find device "nvmf_tgt_br2" 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # true 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:03.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:03.324 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:03.324 18:41:18 
nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:03.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:28:03.324 00:28:03.324 --- 10.0.0.2 ping statistics --- 00:28:03.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.324 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:03.324 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:03.324 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:28:03.324 00:28:03.324 --- 10.0.0.3 ping statistics --- 00:28:03.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.324 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:03.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:03.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:28:03.324 00:28:03.324 --- 10.0.0.1 ping statistics --- 00:28:03.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.324 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@433 -- # return 0 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:03.324 18:41:18 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:03.889 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:03.889 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:03.889 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:04.146 18:41:19 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.146 18:41:19 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:04.146 18:41:19 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:04.146 18:41:19 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.146 18:41:19 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:04.146 18:41:19 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:04.146 18:41:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:04.146 18:41:19 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:04.146 18:41:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:04.146 18:41:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:04.147 18:41:19 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=101284 00:28:04.147 18:41:19 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:04.147 18:41:19 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 101284 00:28:04.147 18:41:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 101284 ']' 00:28:04.147 18:41:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.147 18:41:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:04.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.147 18:41:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.147 18:41:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:04.147 18:41:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:04.147 [2024-05-13 18:41:19.918640] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
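Up to this point nvmf_veth_init has built the whole test network: the initiator keeps 10.0.0.1 on the host side, while 10.0.0.2 and 10.0.0.3 sit on veth peers moved into the nvmf_tgt_ns_spdk namespace, tied together by the nvmf_br bridge and an iptables accept rule for port 4420. A condensed sketch of that setup, using only commands already visible in the trace (the second target interface nvmf_tgt_if2/10.0.0.3 is elided for brevity):

#!/usr/bin/env bash
# Minimal recreation of the veth/bridge topology used by the test (run as root).
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side goes into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2                                  # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # target namespace -> host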
00:28:04.147 [2024-05-13 18:41:19.918720] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.147 [2024-05-13 18:41:20.056980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:04.404 [2024-05-13 18:41:20.183755] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.404 [2024-05-13 18:41:20.184095] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:04.404 [2024-05-13 18:41:20.184270] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:04.404 [2024-05-13 18:41:20.184433] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:04.404 [2024-05-13 18:41:20.184484] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:04.404 [2024-05-13 18:41:20.184769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.404 [2024-05-13 18:41:20.186303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.404 [2024-05-13 18:41:20.186480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:04.404 [2024-05-13 18:41:20.186489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n '' ]] 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@295 -- # local bdf= 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@230 -- # local class 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@231 -- # local subclass 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@232 -- # local progif 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # printf %02x 1 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # class=01 00:28:05.338 18:41:20 
nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # printf %02x 8 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # subclass=08 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # printf %02x 2 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # progif=02 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # hash lspci 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@239 -- # lspci -mm -n -D 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # grep -i -- -p02 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # tr -d '"' 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # local i 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@22 -- # [[ -z '' ]] 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@24 -- # return 0 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- 
target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:05.338 18:41:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:05.338 ************************************ 00:28:05.338 START TEST spdk_target_abort 00:28:05.338 ************************************ 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:05.338 spdk_targetn1 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:05.338 [2024-05-13 18:41:21.091063] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:05.338 [2024-05-13 18:41:21.127033] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:05.338 [2024-05-13 18:41:21.127475] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:05.338 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:05.339 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:05.339 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:05.339 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:05.339 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:05.339 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:05.339 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:05.339 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:05.339 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:05.339 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:05.339 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:05.339 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:05.339 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:05.339 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:05.339 18:41:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:08.616 Initializing NVMe Controllers 00:28:08.616 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:08.616 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:08.616 Initialization complete. Launching workers. 
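The rabort helper traced above reduces to a sweep of the SPDK abort example over the configured queue depths against the listener that was just created. A stripped-down sketch follows; rpc_cmd forwards its arguments to SPDK's scripts/rpc.py, and the target string is the one shown in the trace:

#!/usr/bin/env bash
# Target-side setup already performed via rpc_cmd in the trace above:
#   bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target
#   nvmf_create_transport -t tcp -o -u 8192
#   nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
#   nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
#   nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420

# Queue-depth sweep: one pass per queue depth with 4 KiB mixed read/write I/O
# while abort commands are submitted continuously.
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
for qd in 4 24 64; do
    /home/vagrant/spdk_repo/spdk/build/examples/abort \
        -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done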
00:28:08.616 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11176, failed: 0 00:28:08.616 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1062, failed to submit 10114 00:28:08.616 success 760, unsuccess 302, failed 0 00:28:08.616 18:41:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:08.616 18:41:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:11.901 Initializing NVMe Controllers 00:28:11.901 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:11.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:11.901 Initialization complete. Launching workers. 00:28:11.901 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6059, failed: 0 00:28:11.901 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1259, failed to submit 4800 00:28:11.901 success 268, unsuccess 991, failed 0 00:28:11.901 18:41:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:11.901 18:41:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:15.186 Initializing NVMe Controllers 00:28:15.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:15.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:15.186 Initialization complete. Launching workers. 
00:28:15.186 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30742, failed: 0 00:28:15.186 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2644, failed to submit 28098 00:28:15.186 success 471, unsuccess 2173, failed 0 00:28:15.186 18:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:15.186 18:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.186 18:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:15.186 18:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.186 18:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:15.186 18:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:15.186 18:41:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:15.754 18:41:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:15.754 18:41:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 101284 00:28:15.754 18:41:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 101284 ']' 00:28:15.754 18:41:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 101284 00:28:15.754 18:41:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:28:15.754 18:41:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:15.754 18:41:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 101284 00:28:15.754 killing process with pid 101284 00:28:15.754 18:41:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:15.754 18:41:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:15.754 18:41:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 101284' 00:28:15.754 18:41:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 101284 00:28:15.754 [2024-05-13 18:41:31.590471] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:15.754 18:41:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 101284 00:28:16.013 ************************************ 00:28:16.013 END TEST spdk_target_abort 00:28:16.013 ************************************ 00:28:16.013 00:28:16.013 real 0m10.860s 00:28:16.013 user 0m44.227s 00:28:16.013 sys 0m1.744s 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:16.013 18:41:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:16.013 18:41:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:16.013 18:41:31 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:28:16.013 18:41:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:16.013 ************************************ 00:28:16.013 START TEST kernel_target_abort 00:28:16.013 ************************************ 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@728 -- # local ip 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # ip_candidates=() 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # local -A ip_candidates 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:16.013 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:16.273 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:16.273 18:41:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:16.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:16.532 Waiting for block devices as requested 00:28:16.532 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:16.532 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:16.790 No valid GPT data, bailing 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n2 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n2 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:16.790 No valid GPT data, bailing 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n2 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n3 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n3 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:28:16.790 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:17.049 No valid GPT data, bailing 00:28:17.049 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:17.049 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:17.049 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:17.049 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n3 00:28:17.049 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:17.050 No valid GPT data, bailing 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ 
-b /dev/nvme1n1 ]] 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -a 10.0.0.1 -t tcp -s 4420 00:28:17.050 00:28:17.050 Discovery Log Number of Records 2, Generation counter 2 00:28:17.050 =====Discovery Log Entry 0====== 00:28:17.050 trtype: tcp 00:28:17.050 adrfam: ipv4 00:28:17.050 subtype: current discovery subsystem 00:28:17.050 treq: not specified, sq flow control disable supported 00:28:17.050 portid: 1 00:28:17.050 trsvcid: 4420 00:28:17.050 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:17.050 traddr: 10.0.0.1 00:28:17.050 eflags: none 00:28:17.050 sectype: none 00:28:17.050 =====Discovery Log Entry 1====== 00:28:17.050 trtype: tcp 00:28:17.050 adrfam: ipv4 00:28:17.050 subtype: nvme subsystem 00:28:17.050 treq: not specified, sq flow control disable supported 00:28:17.050 portid: 1 00:28:17.050 trsvcid: 4420 00:28:17.050 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:17.050 traddr: 10.0.0.1 00:28:17.050 eflags: none 00:28:17.050 sectype: none 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:17.050 18:41:32 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:17.050 18:41:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:20.332 Initializing NVMe Controllers 00:28:20.332 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:20.332 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:20.332 Initialization complete. Launching workers. 00:28:20.332 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33181, failed: 0 00:28:20.332 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33181, failed to submit 0 00:28:20.332 success 0, unsuccess 33181, failed 0 00:28:20.332 18:41:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:20.332 18:41:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:23.615 Initializing NVMe Controllers 00:28:23.615 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:23.615 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:23.615 Initialization complete. Launching workers. 
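Behind the discovery log shown earlier, configure_kernel_target drove the in-kernel nvmet target entirely through configfs before these abort runs started: it exported the unused /dev/nvme1n1 block device as namespace 1 of nqn.2016-06.io.spdk:testnqn and opened a TCP port on 10.0.0.1:4420. A minimal sketch follows; xtrace does not show redirection targets, so the configfs attribute paths below are the standard nvmet ones and should be read as assumptions:

#!/usr/bin/env bash
# Kernel nvmet target via configfs (attribute paths assumed, values from the trace).
modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
# The trace also echoes "SPDK-nqn.2016-06.io.spdk:testnqn" into a subsystem
# identification attribute; the exact attribute name is hidden by xtrace.
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 \
    --hostid=3bc393d8-a7d9-4548-bfca-2924fac86a61 -a 10.0.0.1 -t tcp -s 4420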
00:28:23.615 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67754, failed: 0 00:28:23.615 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28993, failed to submit 38761 00:28:23.615 success 0, unsuccess 28993, failed 0 00:28:23.615 18:41:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:23.615 18:41:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:26.901 Initializing NVMe Controllers 00:28:26.901 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:26.901 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:26.901 Initialization complete. Launching workers. 00:28:26.901 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 84467, failed: 0 00:28:26.901 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21136, failed to submit 63331 00:28:26.901 success 0, unsuccess 21136, failed 0 00:28:26.901 18:41:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:26.901 18:41:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:26.901 18:41:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:28:26.901 18:41:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:26.901 18:41:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:26.901 18:41:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:26.901 18:41:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:26.901 18:41:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:26.901 18:41:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:26.901 18:41:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:27.161 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:28.535 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:28.535 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:28.535 00:28:28.535 real 0m12.350s 00:28:28.535 user 0m6.297s 00:28:28.535 sys 0m3.435s 00:28:28.535 18:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:28.535 18:41:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:28.535 ************************************ 00:28:28.535 END TEST kernel_target_abort 00:28:28.535 ************************************ 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:28.535 
18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:28.535 rmmod nvme_tcp 00:28:28.535 rmmod nvme_fabrics 00:28:28.535 rmmod nvme_keyring 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:28.535 Process with pid 101284 is not found 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 101284 ']' 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 101284 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 101284 ']' 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 101284 00:28:28.535 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (101284) - No such process 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 101284 is not found' 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:28:28.535 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:29.102 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:29.102 Waiting for block devices as requested 00:28:29.102 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:29.102 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:29.102 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:29.102 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:29.102 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:29.102 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:29.102 18:41:44 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.102 18:41:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:29.102 18:41:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.102 18:41:45 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:29.102 00:28:29.102 real 0m26.442s 00:28:29.102 user 0m51.665s 00:28:29.102 sys 0m6.538s 00:28:29.102 ************************************ 00:28:29.102 END TEST nvmf_abort_qd_sizes 00:28:29.102 ************************************ 00:28:29.102 18:41:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:29.102 18:41:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:29.360 18:41:45 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:29.360 18:41:45 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:28:29.360 18:41:45 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:28:29.360 18:41:45 -- common/autotest_common.sh@10 -- # set +x 
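Kernel-side teardown is the mirror image: clean_kernel_target, traced just above, disables the namespace, unlinks the port, removes the configfs directories in reverse order and unloads the nvmet modules. As with the setup sketch, the redirection target of the traced 'echo 0' is assumed to be the namespace enable attribute:

#!/usr/bin/env bash
# Kernel nvmet teardown mirroring the configfs setup sketch (paths assumed).
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the traced 'echo 0'
rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
rmdir "$subsys/namespaces/1"
rmdir "$nvmet/ports/1"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet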
00:28:29.360 ************************************ 00:28:29.360 START TEST keyring_file 00:28:29.360 ************************************ 00:28:29.360 18:41:45 keyring_file -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:29.360 * Looking for test storage... 00:28:29.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:29.360 18:41:45 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:29.360 18:41:45 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:3bc393d8-a7d9-4548-bfca-2924fac86a61 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=3bc393d8-a7d9-4548-bfca-2924fac86a61 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:29.360 18:41:45 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:29.360 18:41:45 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:29.360 18:41:45 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:29.360 18:41:45 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.360 18:41:45 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.360 18:41:45 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.360 18:41:45 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:29.360 18:41:45 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@47 -- # : 0 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:29.360 18:41:45 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:29.360 18:41:45 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:29.360 18:41:45 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:29.360 18:41:45 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:29.360 18:41:45 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:29.360 18:41:45 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:29.361 18:41:45 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:29.361 18:41:45 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.i5xTUaQBZU 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:29.361 18:41:45 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:29.361 18:41:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:29.361 18:41:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:29.361 18:41:45 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:29.361 18:41:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:29.361 18:41:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:29.361 18:41:45 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.i5xTUaQBZU 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.i5xTUaQBZU 00:28:29.361 18:41:45 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.i5xTUaQBZU 00:28:29.361 18:41:45 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5bsHXEvWmZ 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:29.361 18:41:45 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:29.361 18:41:45 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:29.361 18:41:45 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:29.361 18:41:45 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:29.361 18:41:45 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:29.361 18:41:45 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5bsHXEvWmZ 00:28:29.361 18:41:45 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5bsHXEvWmZ 00:28:29.361 18:41:45 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.5bsHXEvWmZ 00:28:29.361 18:41:45 keyring_file -- keyring/file.sh@30 -- # tgtpid=102158 00:28:29.361 18:41:45 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:29.361 18:41:45 keyring_file -- keyring/file.sh@32 -- # waitforlisten 102158 00:28:29.361 18:41:45 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 102158 ']' 00:28:29.361 18:41:45 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.361 18:41:45 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:29.361 18:41:45 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.361 18:41:45 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:29.361 18:41:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:29.618 [2024-05-13 18:41:45.337031] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
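The prep_key calls traced above materialise each hex key as a TLS PSK file with owner-only permissions: the key is converted to the NVMe TLS interchange form by a small inline python helper (not reproduced here), written to a mktemp path, chmod'ed to 0600, and the path is echoed back to the caller. A rough sketch of that flow, treating the formatting helper as a black box:

#!/usr/bin/env bash
# Sketch of prep_key's file handling; format_interchange_psk is the python-based
# helper from nvmf/common.sh and is only referenced here, not reimplemented.
prep_key() {
    local name=$1 key=$2 digest=$3 path
    path=$(mktemp)                                     # e.g. /tmp/tmp.i5xTUaQBZU
    format_interchange_psk "$key" "$digest" > "$path"  # NVMeTLSkey-1 interchange form
    chmod 0600 "$path"
    echo "$path"
}

key0path=$(prep_key key0 00112233445566778899aabbccddeeff 0)
key1path=$(prep_key key1 112233445566778899aabbccddeeff00 0)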
00:28:29.618 [2024-05-13 18:41:45.337368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102158 ] 00:28:29.618 [2024-05-13 18:41:45.476368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.876 [2024-05-13 18:41:45.612671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:28:30.440 18:41:46 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:30.440 [2024-05-13 18:41:46.299262] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.440 null0 00:28:30.440 [2024-05-13 18:41:46.331182] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:30.440 [2024-05-13 18:41:46.331273] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:30.440 [2024-05-13 18:41:46.331570] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:30.440 [2024-05-13 18:41:46.339211] tcp.c:3657:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.440 18:41:46 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:30.440 [2024-05-13 18:41:46.351215] nvmf_rpc.c: 768:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:30.440 request: 00:28:30.440 2024/05/13 18:41:46 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:28:30.440 { 00:28:30.440 "method": "nvmf_subsystem_add_listener", 00:28:30.440 "params": { 00:28:30.440 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:30.440 "secure_channel": false, 
00:28:30.440 "listen_address": { 00:28:30.440 "trtype": "tcp", 00:28:30.440 "traddr": "127.0.0.1", 00:28:30.440 "trsvcid": "4420" 00:28:30.440 } 00:28:30.440 } 00:28:30.440 } 00:28:30.440 Got JSON-RPC error response 00:28:30.440 GoRPCClient: error on JSON-RPC call 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:30.440 18:41:46 keyring_file -- keyring/file.sh@46 -- # bperfpid=102193 00:28:30.440 18:41:46 keyring_file -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:30.440 18:41:46 keyring_file -- keyring/file.sh@48 -- # waitforlisten 102193 /var/tmp/bperf.sock 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 102193 ']' 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:30.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:30.440 18:41:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:30.698 [2024-05-13 18:41:46.411223] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 
00:28:30.698 [2024-05-13 18:41:46.411482] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102193 ] 00:28:30.698 [2024-05-13 18:41:46.548673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.957 [2024-05-13 18:41:46.683621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.523 18:41:47 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:31.523 18:41:47 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:28:31.523 18:41:47 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.i5xTUaQBZU 00:28:31.523 18:41:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.i5xTUaQBZU 00:28:31.781 18:41:47 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5bsHXEvWmZ 00:28:31.781 18:41:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5bsHXEvWmZ 00:28:32.039 18:41:47 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:28:32.039 18:41:47 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:28:32.039 18:41:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:32.039 18:41:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:32.039 18:41:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:32.604 18:41:48 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.i5xTUaQBZU == \/\t\m\p\/\t\m\p\.\i\5\x\T\U\a\Q\B\Z\U ]] 00:28:32.604 18:41:48 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:28:32.604 18:41:48 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:32.604 18:41:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:32.604 18:41:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:32.604 18:41:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:32.862 18:41:48 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.5bsHXEvWmZ == \/\t\m\p\/\t\m\p\.\5\b\s\H\X\E\v\W\m\Z ]] 00:28:32.862 18:41:48 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:28:32.862 18:41:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:32.862 18:41:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:32.862 18:41:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:32.862 18:41:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:32.862 18:41:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:33.151 18:41:48 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:28:33.151 18:41:48 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:28:33.151 18:41:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:33.151 18:41:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:33.151 18:41:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:33.151 18:41:48 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:33.151 18:41:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:33.410 18:41:49 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:33.410 18:41:49 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:33.410 18:41:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:33.668 [2024-05-13 18:41:49.388488] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:33.668 nvme0n1 00:28:33.668 18:41:49 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:28:33.668 18:41:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:33.668 18:41:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:33.668 18:41:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:33.668 18:41:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:33.668 18:41:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:33.926 18:41:49 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:28:33.926 18:41:49 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:28:33.926 18:41:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:33.926 18:41:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:33.926 18:41:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:33.926 18:41:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:33.926 18:41:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:34.184 18:41:50 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:28:34.184 18:41:50 keyring_file -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:34.442 Running I/O for 1 seconds... 
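While the 1-second randrw job runs (its latency summary is printed below), the step just traced is the positive TLS path: keyring/file.sh@57 attaches a controller through bdevperf's socket with --psk key0, and the refcount checks confirm that key0 is now pinned by both the keyring entry and the live controller. A sketch, reusing the helpers from the previous sketch:

    # keyring/file.sh@57: attach over TCP with TLS, presenting key0 as the PSK
    bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

    (( $(get_refcnt key0) == 2 ))   # keyring entry + attached controller
    (( $(get_refcnt key1) == 1 ))   # registered but unused

    # kick off the workload whose results appear next in the log
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
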
00:28:35.378 00:28:35.378 Latency(us) 00:28:35.378 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:35.378 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:35.378 nvme0n1 : 1.01 10316.63 40.30 0.00 0.00 12364.81 4438.57 17992.61 00:28:35.378 =================================================================================================================== 00:28:35.378 Total : 10316.63 40.30 0.00 0.00 12364.81 4438.57 17992.61 00:28:35.378 0 00:28:35.378 18:41:51 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:35.378 18:41:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:35.637 18:41:51 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:28:35.637 18:41:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:35.637 18:41:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:35.637 18:41:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:35.637 18:41:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:35.637 18:41:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:35.895 18:41:51 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:28:35.895 18:41:51 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:28:35.895 18:41:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:35.895 18:41:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:35.895 18:41:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:35.895 18:41:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:35.895 18:41:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:36.154 18:41:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:36.154 18:41:52 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:36.154 18:41:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:36.154 18:41:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:36.154 18:41:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:36.154 18:41:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:36.154 18:41:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:36.154 18:41:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:36.154 18:41:52 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:36.154 18:41:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:36.461 [2024-05-13 18:41:52.266641] 
/home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:36.461 [2024-05-13 18:41:52.266795] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10fe6a0 (107): Transport endpoint is not connected 00:28:36.461 [2024-05-13 18:41:52.267792] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10fe6a0 (9): Bad file descriptor 00:28:36.461 [2024-05-13 18:41:52.268788] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:36.461 [2024-05-13 18:41:52.268820] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:36.461 [2024-05-13 18:41:52.268832] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:36.461 2024/05/13 18:41:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:28:36.461 request: 00:28:36.461 { 00:28:36.461 "method": "bdev_nvme_attach_controller", 00:28:36.461 "params": { 00:28:36.461 "name": "nvme0", 00:28:36.461 "trtype": "tcp", 00:28:36.461 "traddr": "127.0.0.1", 00:28:36.461 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:36.461 "adrfam": "ipv4", 00:28:36.461 "trsvcid": "4420", 00:28:36.461 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:36.461 "psk": "key1" 00:28:36.461 } 00:28:36.461 } 00:28:36.461 Got JSON-RPC error response 00:28:36.461 GoRPCClient: error on JSON-RPC call 00:28:36.461 18:41:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:36.461 18:41:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:36.461 18:41:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:36.461 18:41:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:36.461 18:41:52 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:28:36.461 18:41:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:36.461 18:41:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:36.461 18:41:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:36.461 18:41:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:36.461 18:41:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:36.726 18:41:52 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:28:36.726 18:41:52 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:28:36.726 18:41:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:36.726 18:41:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:36.726 18:41:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:36.726 18:41:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:36.726 18:41:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:36.984 18:41:52 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:36.984 18:41:52 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:28:36.984 18:41:52 keyring_file 
-- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:37.242 18:41:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:28:37.242 18:41:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:37.500 18:41:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:28:37.500 18:41:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:37.500 18:41:53 keyring_file -- keyring/file.sh@77 -- # jq length 00:28:37.759 18:41:53 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:28:37.759 18:41:53 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.i5xTUaQBZU 00:28:37.759 18:41:53 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.i5xTUaQBZU 00:28:37.759 18:41:53 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:37.759 18:41:53 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.i5xTUaQBZU 00:28:37.759 18:41:53 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:37.759 18:41:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:37.759 18:41:53 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:37.759 18:41:53 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:37.759 18:41:53 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.i5xTUaQBZU 00:28:37.759 18:41:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.i5xTUaQBZU 00:28:38.017 [2024-05-13 18:41:53.763932] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.i5xTUaQBZU': 0100660 00:28:38.017 [2024-05-13 18:41:53.764003] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:38.017 2024/05/13 18:41:53 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.i5xTUaQBZU], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:28:38.017 request: 00:28:38.017 { 00:28:38.017 "method": "keyring_file_add_key", 00:28:38.017 "params": { 00:28:38.017 "name": "key0", 00:28:38.017 "path": "/tmp/tmp.i5xTUaQBZU" 00:28:38.017 } 00:28:38.017 } 00:28:38.017 Got JSON-RPC error response 00:28:38.017 GoRPCClient: error on JSON-RPC call 00:28:38.017 18:41:53 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:38.017 18:41:53 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:38.017 18:41:53 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:38.017 18:41:53 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:38.017 18:41:53 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.i5xTUaQBZU 00:28:38.017 18:41:53 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.i5xTUaQBZU 00:28:38.017 18:41:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.i5xTUaQBZU 00:28:38.274 18:41:54 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.i5xTUaQBZU 00:28:38.274 18:41:54 
keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:28:38.274 18:41:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:38.274 18:41:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:38.274 18:41:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:38.274 18:41:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:38.274 18:41:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:38.533 18:41:54 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:28:38.533 18:41:54 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:38.533 18:41:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:28:38.533 18:41:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:38.533 18:41:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:38.533 18:41:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:38.533 18:41:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:38.533 18:41:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:38.533 18:41:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:38.533 18:41:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:38.792 [2024-05-13 18:41:54.564182] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.i5xTUaQBZU': No such file or directory 00:28:38.792 [2024-05-13 18:41:54.564287] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:38.792 [2024-05-13 18:41:54.564328] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:38.792 [2024-05-13 18:41:54.564342] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:38.792 [2024-05-13 18:41:54.564353] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:38.792 2024/05/13 18:41:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:28:38.792 request: 00:28:38.792 { 00:28:38.792 "method": "bdev_nvme_attach_controller", 00:28:38.792 "params": { 00:28:38.792 "name": "nvme0", 00:28:38.792 "trtype": "tcp", 00:28:38.792 "traddr": "127.0.0.1", 00:28:38.792 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:38.792 "adrfam": "ipv4", 00:28:38.792 "trsvcid": "4420", 00:28:38.792 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:38.792 "psk": "key0" 00:28:38.792 } 00:28:38.792 } 
00:28:38.792 Got JSON-RPC error response 00:28:38.792 GoRPCClient: error on JSON-RPC call 00:28:38.792 18:41:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:28:38.792 18:41:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:38.792 18:41:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:38.792 18:41:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:38.792 18:41:54 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:28:38.792 18:41:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:39.051 18:41:54 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:39.051 18:41:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:39.051 18:41:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:39.051 18:41:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:39.051 18:41:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:39.051 18:41:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:39.051 18:41:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.iSLHuFGK9D 00:28:39.051 18:41:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:39.051 18:41:54 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:39.051 18:41:54 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:28:39.051 18:41:54 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:39.051 18:41:54 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:39.051 18:41:54 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:28:39.051 18:41:54 keyring_file -- nvmf/common.sh@705 -- # python - 00:28:39.051 18:41:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.iSLHuFGK9D 00:28:39.051 18:41:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.iSLHuFGK9D 00:28:39.051 18:41:54 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.iSLHuFGK9D 00:28:39.051 18:41:54 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iSLHuFGK9D 00:28:39.051 18:41:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iSLHuFGK9D 00:28:39.309 18:41:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:39.309 18:41:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:39.567 nvme0n1 00:28:39.567 18:41:55 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:28:39.567 18:41:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:39.567 18:41:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:39.567 18:41:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:39.567 18:41:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:39.567 18:41:55 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:39.834 18:41:55 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:28:39.834 18:41:55 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:28:39.834 18:41:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:40.110 18:41:55 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:40.110 18:41:55 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:40.110 18:41:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:40.110 18:41:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:40.110 18:41:55 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:40.369 18:41:56 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:40.369 18:41:56 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:40.369 18:41:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:40.369 18:41:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:40.369 18:41:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:40.369 18:41:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:40.369 18:41:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:40.627 18:41:56 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:40.627 18:41:56 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:40.627 18:41:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:40.885 18:41:56 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:28:40.885 18:41:56 keyring_file -- keyring/file.sh@104 -- # jq length 00:28:40.885 18:41:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:41.143 18:41:57 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:28:41.143 18:41:57 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.iSLHuFGK9D 00:28:41.143 18:41:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.iSLHuFGK9D 00:28:41.402 18:41:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.5bsHXEvWmZ 00:28:41.402 18:41:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.5bsHXEvWmZ 00:28:41.660 18:41:57 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:41.660 18:41:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:41.918 nvme0n1 00:28:41.918 18:41:57 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:41.918 18:41:57 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:42.486 18:41:58 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:42.486 "subsystems": [ 00:28:42.486 { 00:28:42.486 "subsystem": "keyring", 00:28:42.486 "config": [ 00:28:42.486 { 00:28:42.486 "method": "keyring_file_add_key", 00:28:42.486 "params": { 00:28:42.486 "name": "key0", 00:28:42.486 "path": "/tmp/tmp.iSLHuFGK9D" 00:28:42.486 } 00:28:42.486 }, 00:28:42.486 { 00:28:42.486 "method": "keyring_file_add_key", 00:28:42.486 "params": { 00:28:42.486 "name": "key1", 00:28:42.486 "path": "/tmp/tmp.5bsHXEvWmZ" 00:28:42.486 } 00:28:42.486 } 00:28:42.486 ] 00:28:42.486 }, 00:28:42.486 { 00:28:42.486 "subsystem": "iobuf", 00:28:42.486 "config": [ 00:28:42.486 { 00:28:42.486 "method": "iobuf_set_options", 00:28:42.486 "params": { 00:28:42.486 "large_bufsize": 135168, 00:28:42.486 "large_pool_count": 1024, 00:28:42.486 "small_bufsize": 8192, 00:28:42.486 "small_pool_count": 8192 00:28:42.486 } 00:28:42.486 } 00:28:42.486 ] 00:28:42.486 }, 00:28:42.486 { 00:28:42.486 "subsystem": "sock", 00:28:42.486 "config": [ 00:28:42.486 { 00:28:42.486 "method": "sock_impl_set_options", 00:28:42.486 "params": { 00:28:42.486 "enable_ktls": false, 00:28:42.486 "enable_placement_id": 0, 00:28:42.486 "enable_quickack": false, 00:28:42.486 "enable_recv_pipe": true, 00:28:42.486 "enable_zerocopy_send_client": false, 00:28:42.486 "enable_zerocopy_send_server": true, 00:28:42.486 "impl_name": "posix", 00:28:42.486 "recv_buf_size": 2097152, 00:28:42.486 "send_buf_size": 2097152, 00:28:42.486 "tls_version": 0, 00:28:42.486 "zerocopy_threshold": 0 00:28:42.486 } 00:28:42.486 }, 00:28:42.486 { 00:28:42.486 "method": "sock_impl_set_options", 00:28:42.486 "params": { 00:28:42.486 "enable_ktls": false, 00:28:42.486 "enable_placement_id": 0, 00:28:42.486 "enable_quickack": false, 00:28:42.486 "enable_recv_pipe": true, 00:28:42.486 "enable_zerocopy_send_client": false, 00:28:42.486 "enable_zerocopy_send_server": true, 00:28:42.486 "impl_name": "ssl", 00:28:42.486 "recv_buf_size": 4096, 00:28:42.486 "send_buf_size": 4096, 00:28:42.486 "tls_version": 0, 00:28:42.486 "zerocopy_threshold": 0 00:28:42.486 } 00:28:42.486 } 00:28:42.486 ] 00:28:42.486 }, 00:28:42.486 { 00:28:42.486 "subsystem": "vmd", 00:28:42.486 "config": [] 00:28:42.486 }, 00:28:42.486 { 00:28:42.486 "subsystem": "accel", 00:28:42.486 "config": [ 00:28:42.486 { 00:28:42.486 "method": "accel_set_options", 00:28:42.486 "params": { 00:28:42.486 "buf_count": 2048, 00:28:42.486 "large_cache_size": 16, 00:28:42.486 "sequence_count": 2048, 00:28:42.486 "small_cache_size": 128, 00:28:42.486 "task_count": 2048 00:28:42.486 } 00:28:42.486 } 00:28:42.486 ] 00:28:42.486 }, 00:28:42.486 { 00:28:42.486 "subsystem": "bdev", 00:28:42.486 "config": [ 00:28:42.486 { 00:28:42.486 "method": "bdev_set_options", 00:28:42.486 "params": { 00:28:42.486 "bdev_auto_examine": true, 00:28:42.486 "bdev_io_cache_size": 256, 00:28:42.486 "bdev_io_pool_size": 65535, 00:28:42.486 "iobuf_large_cache_size": 16, 00:28:42.486 "iobuf_small_cache_size": 128 00:28:42.486 } 00:28:42.486 }, 00:28:42.486 { 00:28:42.486 "method": "bdev_raid_set_options", 00:28:42.486 "params": { 00:28:42.486 "process_window_size_kb": 1024 00:28:42.486 } 00:28:42.486 }, 00:28:42.486 { 00:28:42.486 "method": "bdev_iscsi_set_options", 00:28:42.486 "params": { 00:28:42.486 "timeout_sec": 30 00:28:42.486 } 00:28:42.486 }, 00:28:42.486 { 00:28:42.486 "method": "bdev_nvme_set_options", 00:28:42.486 "params": { 00:28:42.486 
"action_on_timeout": "none", 00:28:42.486 "allow_accel_sequence": false, 00:28:42.486 "arbitration_burst": 0, 00:28:42.486 "bdev_retry_count": 3, 00:28:42.486 "ctrlr_loss_timeout_sec": 0, 00:28:42.486 "delay_cmd_submit": true, 00:28:42.486 "dhchap_dhgroups": [ 00:28:42.486 "null", 00:28:42.486 "ffdhe2048", 00:28:42.486 "ffdhe3072", 00:28:42.486 "ffdhe4096", 00:28:42.486 "ffdhe6144", 00:28:42.486 "ffdhe8192" 00:28:42.486 ], 00:28:42.486 "dhchap_digests": [ 00:28:42.486 "sha256", 00:28:42.486 "sha384", 00:28:42.486 "sha512" 00:28:42.486 ], 00:28:42.486 "disable_auto_failback": false, 00:28:42.486 "fast_io_fail_timeout_sec": 0, 00:28:42.486 "generate_uuids": false, 00:28:42.486 "high_priority_weight": 0, 00:28:42.486 "io_path_stat": false, 00:28:42.486 "io_queue_requests": 512, 00:28:42.486 "keep_alive_timeout_ms": 10000, 00:28:42.486 "low_priority_weight": 0, 00:28:42.486 "medium_priority_weight": 0, 00:28:42.486 "nvme_adminq_poll_period_us": 10000, 00:28:42.486 "nvme_error_stat": false, 00:28:42.486 "nvme_ioq_poll_period_us": 0, 00:28:42.486 "rdma_cm_event_timeout_ms": 0, 00:28:42.486 "rdma_max_cq_size": 0, 00:28:42.486 "rdma_srq_size": 0, 00:28:42.486 "reconnect_delay_sec": 0, 00:28:42.486 "timeout_admin_us": 0, 00:28:42.486 "timeout_us": 0, 00:28:42.486 "transport_ack_timeout": 0, 00:28:42.486 "transport_retry_count": 4, 00:28:42.486 "transport_tos": 0 00:28:42.486 } 00:28:42.486 }, 00:28:42.486 { 00:28:42.486 "method": "bdev_nvme_attach_controller", 00:28:42.486 "params": { 00:28:42.486 "adrfam": "IPv4", 00:28:42.486 "ctrlr_loss_timeout_sec": 0, 00:28:42.486 "ddgst": false, 00:28:42.486 "fast_io_fail_timeout_sec": 0, 00:28:42.486 "hdgst": false, 00:28:42.486 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:42.486 "name": "nvme0", 00:28:42.486 "prchk_guard": false, 00:28:42.486 "prchk_reftag": false, 00:28:42.486 "psk": "key0", 00:28:42.486 "reconnect_delay_sec": 0, 00:28:42.486 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:42.486 "traddr": "127.0.0.1", 00:28:42.486 "trsvcid": "4420", 00:28:42.486 "trtype": "TCP" 00:28:42.486 } 00:28:42.486 }, 00:28:42.486 { 00:28:42.486 "method": "bdev_nvme_set_hotplug", 00:28:42.486 "params": { 00:28:42.486 "enable": false, 00:28:42.486 "period_us": 100000 00:28:42.486 } 00:28:42.486 }, 00:28:42.486 { 00:28:42.486 "method": "bdev_wait_for_examine" 00:28:42.486 } 00:28:42.486 ] 00:28:42.486 }, 00:28:42.486 { 00:28:42.486 "subsystem": "nbd", 00:28:42.486 "config": [] 00:28:42.486 } 00:28:42.486 ] 00:28:42.486 }' 00:28:42.486 18:41:58 keyring_file -- keyring/file.sh@114 -- # killprocess 102193 00:28:42.487 18:41:58 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 102193 ']' 00:28:42.487 18:41:58 keyring_file -- common/autotest_common.sh@950 -- # kill -0 102193 00:28:42.487 18:41:58 keyring_file -- common/autotest_common.sh@951 -- # uname 00:28:42.487 18:41:58 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:42.487 18:41:58 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102193 00:28:42.487 killing process with pid 102193 00:28:42.487 Received shutdown signal, test time was about 1.000000 seconds 00:28:42.487 00:28:42.487 Latency(us) 00:28:42.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.487 =================================================================================================================== 00:28:42.487 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:42.487 18:41:58 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 
00:28:42.487 18:41:58 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:42.487 18:41:58 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102193' 00:28:42.487 18:41:58 keyring_file -- common/autotest_common.sh@965 -- # kill 102193 00:28:42.487 18:41:58 keyring_file -- common/autotest_common.sh@970 -- # wait 102193 00:28:42.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:42.747 18:41:58 keyring_file -- keyring/file.sh@117 -- # bperfpid=102666 00:28:42.747 18:41:58 keyring_file -- keyring/file.sh@119 -- # waitforlisten 102666 /var/tmp/bperf.sock 00:28:42.747 18:41:58 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 102666 ']' 00:28:42.747 18:41:58 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:42.747 18:41:58 keyring_file -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:42.747 18:41:58 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:42.747 18:41:58 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:42.747 "subsystems": [ 00:28:42.747 { 00:28:42.747 "subsystem": "keyring", 00:28:42.747 "config": [ 00:28:42.747 { 00:28:42.747 "method": "keyring_file_add_key", 00:28:42.747 "params": { 00:28:42.747 "name": "key0", 00:28:42.747 "path": "/tmp/tmp.iSLHuFGK9D" 00:28:42.747 } 00:28:42.747 }, 00:28:42.747 { 00:28:42.747 "method": "keyring_file_add_key", 00:28:42.747 "params": { 00:28:42.747 "name": "key1", 00:28:42.747 "path": "/tmp/tmp.5bsHXEvWmZ" 00:28:42.747 } 00:28:42.747 } 00:28:42.747 ] 00:28:42.747 }, 00:28:42.747 { 00:28:42.747 "subsystem": "iobuf", 00:28:42.747 "config": [ 00:28:42.747 { 00:28:42.747 "method": "iobuf_set_options", 00:28:42.747 "params": { 00:28:42.747 "large_bufsize": 135168, 00:28:42.747 "large_pool_count": 1024, 00:28:42.747 "small_bufsize": 8192, 00:28:42.747 "small_pool_count": 8192 00:28:42.747 } 00:28:42.747 } 00:28:42.747 ] 00:28:42.747 }, 00:28:42.747 { 00:28:42.747 "subsystem": "sock", 00:28:42.747 "config": [ 00:28:42.747 { 00:28:42.747 "method": "sock_impl_set_options", 00:28:42.747 "params": { 00:28:42.747 "enable_ktls": false, 00:28:42.747 "enable_placement_id": 0, 00:28:42.747 "enable_quickack": false, 00:28:42.747 "enable_recv_pipe": true, 00:28:42.747 "enable_zerocopy_send_client": false, 00:28:42.747 "enable_zerocopy_send_server": true, 00:28:42.747 "impl_name": "posix", 00:28:42.747 "recv_buf_size": 2097152, 00:28:42.747 "send_buf_size": 2097152, 00:28:42.747 "tls_version": 0, 00:28:42.747 "zerocopy_threshold": 0 00:28:42.747 } 00:28:42.747 }, 00:28:42.747 { 00:28:42.747 "method": "sock_impl_set_options", 00:28:42.747 "params": { 00:28:42.747 "enable_ktls": false, 00:28:42.747 "enable_placement_id": 0, 00:28:42.747 "enable_quickack": false, 00:28:42.747 "enable_recv_pipe": true, 00:28:42.747 "enable_zerocopy_send_client": false, 00:28:42.747 "enable_zerocopy_send_server": true, 00:28:42.747 "impl_name": "ssl", 00:28:42.747 "recv_buf_size": 4096, 00:28:42.747 "send_buf_size": 4096, 00:28:42.747 "tls_version": 0, 00:28:42.747 "zerocopy_threshold": 0 00:28:42.747 } 00:28:42.747 } 00:28:42.747 ] 00:28:42.747 }, 00:28:42.747 { 00:28:42.747 "subsystem": "vmd", 00:28:42.747 "config": [] 00:28:42.747 }, 00:28:42.747 { 00:28:42.747 "subsystem": "accel", 00:28:42.747 "config": [ 00:28:42.747 { 00:28:42.747 "method": "accel_set_options", 00:28:42.747 
"params": { 00:28:42.747 "buf_count": 2048, 00:28:42.747 "large_cache_size": 16, 00:28:42.747 "sequence_count": 2048, 00:28:42.747 "small_cache_size": 128, 00:28:42.747 "task_count": 2048 00:28:42.747 } 00:28:42.747 } 00:28:42.747 ] 00:28:42.747 }, 00:28:42.747 { 00:28:42.747 "subsystem": "bdev", 00:28:42.747 "config": [ 00:28:42.747 { 00:28:42.747 "method": "bdev_set_options", 00:28:42.747 "params": { 00:28:42.747 "bdev_auto_examine": true, 00:28:42.747 "bdev_io_cache_size": 256, 00:28:42.747 "bdev_io_pool_size": 65535, 00:28:42.747 "iobuf_large_cache_size": 16, 00:28:42.747 "iobuf_small_cache_size": 128 00:28:42.747 } 00:28:42.747 }, 00:28:42.747 { 00:28:42.747 "method": "bdev_raid_set_options", 00:28:42.747 "params": { 00:28:42.747 "process_window_size_kb": 1024 00:28:42.747 } 00:28:42.747 }, 00:28:42.747 { 00:28:42.747 "method": "bdev_iscsi_set_options", 00:28:42.747 "params": { 00:28:42.747 "timeout_sec": 30 00:28:42.747 } 00:28:42.747 }, 00:28:42.747 { 00:28:42.747 "method": "bdev_nvme_set_options", 00:28:42.747 "params": { 00:28:42.747 "action_on_timeout": "none", 00:28:42.747 "allow_accel_sequence": false, 00:28:42.747 "arbitration_burst": 0, 00:28:42.747 "bdev_retry_count": 3, 00:28:42.747 "ctrlr_loss_timeout_sec": 0, 00:28:42.747 "delay_cmd_submit": true, 00:28:42.747 "dhchap_dhgroups": [ 00:28:42.747 "null", 00:28:42.747 "ffdhe2048", 00:28:42.747 "ffdhe3072", 00:28:42.747 "ffdhe4096", 00:28:42.747 "ffdhe6144", 00:28:42.747 "ffdhe8192" 00:28:42.747 ], 00:28:42.747 "dhchap_digests": [ 00:28:42.747 "sha256", 00:28:42.747 "sha384", 00:28:42.747 "sha512" 00:28:42.747 ], 00:28:42.747 "disable_auto_failback": false, 00:28:42.747 "fast_io_fail_timeout_sec": 0, 00:28:42.747 "generate_uuids": false, 00:28:42.747 "high_priority_weight": 0, 00:28:42.747 "io_path_stat": false, 00:28:42.747 "io_queue_requests": 512, 00:28:42.747 "keep_alive_timeout_ms": 10000, 00:28:42.747 "low_priority_weight": 0, 00:28:42.747 "medium_priority_weight": 0, 00:28:42.747 "nvme_adminq_poll_period_us": 10000, 00:28:42.747 "nvme_error_stat": false, 00:28:42.747 "nvme_ioq_poll_period_us": 0, 00:28:42.747 "rdma_cm_event_timeout_ms": 0, 00:28:42.747 "rdma_max_cq_size": 0, 00:28:42.747 "rdma_srq_size": 0, 00:28:42.747 "reconnect_delay_sec": 0, 00:28:42.747 "timeout_admin_us": 0, 00:28:42.747 "timeout_us": 0, 00:28:42.747 "transport_ack_timeout": 0, 00:28:42.747 "transport_retry_count": 4, 00:28:42.747 "transport_tos": 0 00:28:42.747 } 00:28:42.747 }, 00:28:42.747 { 00:28:42.747 "method": "bdev_nvme_attach_controller", 00:28:42.747 "params": { 00:28:42.747 "adrfam": "IPv4", 00:28:42.747 "ctrlr_loss_timeout_sec": 0, 00:28:42.747 "ddgst": false, 00:28:42.747 "fast_io_fail_timeout_sec": 0, 00:28:42.747 "hdgst": false, 00:28:42.747 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:42.747 "name": "nvme0", 00:28:42.747 "prchk_guard": false, 00:28:42.747 "prchk_reftag": false, 00:28:42.747 "psk": "key0", 00:28:42.747 "reconnect_delay_sec": 0, 00:28:42.747 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:42.748 "traddr": "127.0.0.1", 00:28:42.748 "trsvcid": "4420", 00:28:42.748 "trtype": "TCP" 00:28:42.748 } 00:28:42.748 }, 00:28:42.748 { 00:28:42.748 "method": "bdev_nvme_set_hotplug", 00:28:42.748 "params": { 00:28:42.748 "enable": false, 00:28:42.748 "period_us": 100000 00:28:42.748 } 00:28:42.748 }, 00:28:42.748 { 00:28:42.748 "method": "bdev_wait_for_examine" 00:28:42.748 } 00:28:42.748 ] 00:28:42.748 }, 00:28:42.748 { 00:28:42.748 "subsystem": "nbd", 00:28:42.748 "config": [] 00:28:42.748 } 00:28:42.748 ] 00:28:42.748 }' 
00:28:42.748 18:41:58 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:42.748 18:41:58 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:42.748 18:41:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:42.748 [2024-05-13 18:41:58.680687] Starting SPDK v24.05-pre git sha1 b084cba07 / DPDK 23.11.0 initialization... 00:28:42.748 [2024-05-13 18:41:58.680785] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102666 ] 00:28:43.006 [2024-05-13 18:41:58.819755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.264 [2024-05-13 18:41:58.965703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.264 [2024-05-13 18:41:59.195601] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:43.831 18:41:59 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:43.831 18:41:59 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:28:43.831 18:41:59 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:43.831 18:41:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:43.831 18:41:59 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:44.090 18:41:59 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:44.090 18:41:59 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:44.090 18:41:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:44.090 18:41:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:44.090 18:41:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:44.090 18:41:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:44.090 18:41:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:44.349 18:42:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:44.349 18:42:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:28:44.349 18:42:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:44.349 18:42:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:44.349 18:42:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:44.349 18:42:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:44.349 18:42:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:44.608 18:42:00 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:44.608 18:42:00 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:44.608 18:42:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:44.608 18:42:00 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:44.866 18:42:00 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:44.866 18:42:00 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:44.866 18:42:00 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.iSLHuFGK9D /tmp/tmp.5bsHXEvWmZ 00:28:44.866 
18:42:00 keyring_file -- keyring/file.sh@20 -- # killprocess 102666 00:28:44.866 18:42:00 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 102666 ']' 00:28:44.866 18:42:00 keyring_file -- common/autotest_common.sh@950 -- # kill -0 102666 00:28:44.866 18:42:00 keyring_file -- common/autotest_common.sh@951 -- # uname 00:28:44.866 18:42:00 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:44.866 18:42:00 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102666 00:28:44.866 killing process with pid 102666 00:28:44.866 Received shutdown signal, test time was about 1.000000 seconds 00:28:44.866 00:28:44.866 Latency(us) 00:28:44.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:44.866 =================================================================================================================== 00:28:44.866 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:44.866 18:42:00 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:28:44.866 18:42:00 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:28:44.866 18:42:00 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102666' 00:28:44.866 18:42:00 keyring_file -- common/autotest_common.sh@965 -- # kill 102666 00:28:44.866 18:42:00 keyring_file -- common/autotest_common.sh@970 -- # wait 102666 00:28:45.125 18:42:01 keyring_file -- keyring/file.sh@21 -- # killprocess 102158 00:28:45.125 18:42:01 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 102158 ']' 00:28:45.125 18:42:01 keyring_file -- common/autotest_common.sh@950 -- # kill -0 102158 00:28:45.125 18:42:01 keyring_file -- common/autotest_common.sh@951 -- # uname 00:28:45.125 18:42:01 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:45.125 18:42:01 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 102158 00:28:45.125 killing process with pid 102158 00:28:45.125 18:42:01 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:45.125 18:42:01 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:45.125 18:42:01 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 102158' 00:28:45.125 18:42:01 keyring_file -- common/autotest_common.sh@965 -- # kill 102158 00:28:45.125 [2024-05-13 18:42:01.042838] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:28:45.125 [2024-05-13 18:42:01.042885] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:45.125 18:42:01 keyring_file -- common/autotest_common.sh@970 -- # wait 102158 00:28:45.693 00:28:45.693 real 0m16.427s 00:28:45.693 user 0m40.540s 00:28:45.693 sys 0m3.481s 00:28:45.693 18:42:01 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:45.693 18:42:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:45.693 ************************************ 00:28:45.693 END TEST keyring_file 00:28:45.693 ************************************ 00:28:45.693 18:42:01 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:28:45.693 18:42:01 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:28:45.693 18:42:01 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:28:45.693 18:42:01 -- spdk/autotest.sh@314 
-- # '[' 0 -eq 1 ']' 00:28:45.693 18:42:01 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:28:45.693 18:42:01 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:28:45.693 18:42:01 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:28:45.693 18:42:01 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:28:45.693 18:42:01 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:28:45.693 18:42:01 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:28:45.693 18:42:01 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:28:45.693 18:42:01 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:28:45.693 18:42:01 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:28:45.693 18:42:01 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:28:45.693 18:42:01 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:28:45.693 18:42:01 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:28:45.693 18:42:01 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:28:45.693 18:42:01 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:28:45.693 18:42:01 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:45.693 18:42:01 -- common/autotest_common.sh@10 -- # set +x 00:28:45.693 18:42:01 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:28:45.693 18:42:01 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:28:45.693 18:42:01 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:28:45.693 18:42:01 -- common/autotest_common.sh@10 -- # set +x 00:28:47.596 INFO: APP EXITING 00:28:47.596 INFO: killing all VMs 00:28:47.596 INFO: killing vhost app 00:28:47.596 INFO: EXIT DONE 00:28:47.854 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:48.113 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:48.113 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:48.680 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:48.680 Cleaning 00:28:48.680 Removing: /var/run/dpdk/spdk0/config 00:28:48.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:48.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:48.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:48.680 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:48.680 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:48.680 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:48.938 Removing: /var/run/dpdk/spdk1/config 00:28:48.938 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:48.938 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:48.938 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:48.938 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:48.938 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:48.938 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:48.938 Removing: /var/run/dpdk/spdk2/config 00:28:48.938 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:48.938 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:48.938 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:48.938 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:48.938 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:48.938 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:48.938 Removing: /var/run/dpdk/spdk3/config 00:28:48.938 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:48.938 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:48.939 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:48.939 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:48.939 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:48.939 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:48.939 Removing: /var/run/dpdk/spdk4/config 00:28:48.939 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:48.939 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:48.939 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:48.939 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:48.939 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:48.939 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:48.939 Removing: /dev/shm/nvmf_trace.0 00:28:48.939 Removing: /dev/shm/spdk_tgt_trace.pid60513 00:28:48.939 Removing: /var/run/dpdk/spdk0 00:28:48.939 Removing: /var/run/dpdk/spdk1 00:28:48.939 Removing: /var/run/dpdk/spdk2 00:28:48.939 Removing: /var/run/dpdk/spdk3 00:28:48.939 Removing: /var/run/dpdk/spdk4 00:28:48.939 Removing: /var/run/dpdk/spdk_pid100013 00:28:48.939 Removing: /var/run/dpdk/spdk_pid100171 00:28:48.939 Removing: /var/run/dpdk/spdk_pid100324 00:28:48.939 Removing: /var/run/dpdk/spdk_pid100418 00:28:48.939 Removing: /var/run/dpdk/spdk_pid100570 00:28:48.939 Removing: /var/run/dpdk/spdk_pid100676 00:28:48.939 Removing: /var/run/dpdk/spdk_pid101353 00:28:48.939 Removing: /var/run/dpdk/spdk_pid101383 00:28:48.939 Removing: /var/run/dpdk/spdk_pid101424 00:28:48.939 Removing: /var/run/dpdk/spdk_pid101671 00:28:48.939 Removing: /var/run/dpdk/spdk_pid101706 00:28:48.939 Removing: /var/run/dpdk/spdk_pid101736 00:28:48.939 Removing: /var/run/dpdk/spdk_pid102158 00:28:48.939 Removing: /var/run/dpdk/spdk_pid102193 00:28:48.939 Removing: /var/run/dpdk/spdk_pid102666 00:28:48.939 Removing: /var/run/dpdk/spdk_pid60368 00:28:48.939 Removing: /var/run/dpdk/spdk_pid60513 00:28:48.939 Removing: /var/run/dpdk/spdk_pid60774 00:28:48.939 Removing: /var/run/dpdk/spdk_pid60872 00:28:48.939 Removing: /var/run/dpdk/spdk_pid60906 00:28:48.939 Removing: /var/run/dpdk/spdk_pid61021 00:28:48.939 Removing: /var/run/dpdk/spdk_pid61051 00:28:48.939 Removing: /var/run/dpdk/spdk_pid61169 00:28:48.939 Removing: /var/run/dpdk/spdk_pid61444 00:28:48.939 Removing: /var/run/dpdk/spdk_pid61614 00:28:48.939 Removing: /var/run/dpdk/spdk_pid61696 00:28:48.939 Removing: /var/run/dpdk/spdk_pid61788 00:28:48.939 Removing: /var/run/dpdk/spdk_pid61878 00:28:48.939 Removing: /var/run/dpdk/spdk_pid61916 00:28:48.939 Removing: /var/run/dpdk/spdk_pid61952 00:28:48.939 Removing: /var/run/dpdk/spdk_pid62013 00:28:48.939 Removing: /var/run/dpdk/spdk_pid62132 00:28:48.939 Removing: /var/run/dpdk/spdk_pid62763 00:28:48.939 Removing: /var/run/dpdk/spdk_pid62827 00:28:48.939 Removing: /var/run/dpdk/spdk_pid62896 00:28:48.939 Removing: /var/run/dpdk/spdk_pid62924 00:28:48.939 Removing: /var/run/dpdk/spdk_pid63003 00:28:48.939 Removing: /var/run/dpdk/spdk_pid63031 00:28:48.939 Removing: /var/run/dpdk/spdk_pid63110 00:28:48.939 Removing: /var/run/dpdk/spdk_pid63138 00:28:48.939 Removing: /var/run/dpdk/spdk_pid63190 00:28:48.939 Removing: /var/run/dpdk/spdk_pid63220 00:28:48.939 Removing: /var/run/dpdk/spdk_pid63271 00:28:48.939 Removing: /var/run/dpdk/spdk_pid63301 00:28:48.939 Removing: /var/run/dpdk/spdk_pid63453 00:28:48.939 Removing: /var/run/dpdk/spdk_pid63489 00:28:48.939 Removing: /var/run/dpdk/spdk_pid63563 00:28:48.939 Removing: /var/run/dpdk/spdk_pid63633 00:28:48.939 Removing: /var/run/dpdk/spdk_pid63657 00:28:49.198 Removing: /var/run/dpdk/spdk_pid63721 00:28:49.198 Removing: /var/run/dpdk/spdk_pid63756 00:28:49.198 Removing: 
/var/run/dpdk/spdk_pid63790 00:28:49.198 Removing: /var/run/dpdk/spdk_pid63825 00:28:49.198 Removing: /var/run/dpdk/spdk_pid63859 00:28:49.198 Removing: /var/run/dpdk/spdk_pid63894 00:28:49.198 Removing: /var/run/dpdk/spdk_pid63934 00:28:49.198 Removing: /var/run/dpdk/spdk_pid63963 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64005 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64034 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64074 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64114 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64143 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64183 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64218 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64252 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64292 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64330 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64373 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64407 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64443 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64513 00:28:49.198 Removing: /var/run/dpdk/spdk_pid64624 00:28:49.198 Removing: /var/run/dpdk/spdk_pid65043 00:28:49.198 Removing: /var/run/dpdk/spdk_pid71772 00:28:49.198 Removing: /var/run/dpdk/spdk_pid72111 00:28:49.198 Removing: /var/run/dpdk/spdk_pid74515 00:28:49.198 Removing: /var/run/dpdk/spdk_pid74893 00:28:49.198 Removing: /var/run/dpdk/spdk_pid75157 00:28:49.198 Removing: /var/run/dpdk/spdk_pid75199 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76032 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76041 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76094 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76153 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76213 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76257 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76259 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76285 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76327 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76330 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76387 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76447 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76507 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76556 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76558 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76584 00:28:49.198 Removing: /var/run/dpdk/spdk_pid76880 00:28:49.198 Removing: /var/run/dpdk/spdk_pid77025 00:28:49.198 Removing: /var/run/dpdk/spdk_pid77282 00:28:49.198 Removing: /var/run/dpdk/spdk_pid77332 00:28:49.198 Removing: /var/run/dpdk/spdk_pid77689 00:28:49.198 Removing: /var/run/dpdk/spdk_pid78215 00:28:49.198 Removing: /var/run/dpdk/spdk_pid78657 00:28:49.198 Removing: /var/run/dpdk/spdk_pid79627 00:28:49.198 Removing: /var/run/dpdk/spdk_pid80610 00:28:49.198 Removing: /var/run/dpdk/spdk_pid80722 00:28:49.198 Removing: /var/run/dpdk/spdk_pid80792 00:28:49.198 Removing: /var/run/dpdk/spdk_pid82255 00:28:49.198 Removing: /var/run/dpdk/spdk_pid82487 00:28:49.198 Removing: /var/run/dpdk/spdk_pid82926 00:28:49.198 Removing: /var/run/dpdk/spdk_pid83034 00:28:49.198 Removing: /var/run/dpdk/spdk_pid83176 00:28:49.198 Removing: /var/run/dpdk/spdk_pid83227 00:28:49.198 Removing: /var/run/dpdk/spdk_pid83267 00:28:49.198 Removing: /var/run/dpdk/spdk_pid83313 00:28:49.198 Removing: /var/run/dpdk/spdk_pid83477 00:28:49.198 Removing: /var/run/dpdk/spdk_pid83624 00:28:49.198 Removing: /var/run/dpdk/spdk_pid83898 00:28:49.198 Removing: /var/run/dpdk/spdk_pid84017 00:28:49.198 Removing: /var/run/dpdk/spdk_pid84271 00:28:49.198 Removing: /var/run/dpdk/spdk_pid84391 00:28:49.198 Removing: /var/run/dpdk/spdk_pid84530 
00:28:49.198 Removing: /var/run/dpdk/spdk_pid84868 00:28:49.198 Removing: /var/run/dpdk/spdk_pid85250 00:28:49.198 Removing: /var/run/dpdk/spdk_pid85258 00:28:49.198 Removing: /var/run/dpdk/spdk_pid87497 00:28:49.198 Removing: /var/run/dpdk/spdk_pid87809 00:28:49.198 Removing: /var/run/dpdk/spdk_pid88305 00:28:49.198 Removing: /var/run/dpdk/spdk_pid88307 00:28:49.198 Removing: /var/run/dpdk/spdk_pid88648 00:28:49.198 Removing: /var/run/dpdk/spdk_pid88662 00:28:49.198 Removing: /var/run/dpdk/spdk_pid88676 00:28:49.198 Removing: /var/run/dpdk/spdk_pid88707 00:28:49.198 Removing: /var/run/dpdk/spdk_pid88713 00:28:49.457 Removing: /var/run/dpdk/spdk_pid88851 00:28:49.457 Removing: /var/run/dpdk/spdk_pid88864 00:28:49.457 Removing: /var/run/dpdk/spdk_pid88967 00:28:49.457 Removing: /var/run/dpdk/spdk_pid88969 00:28:49.457 Removing: /var/run/dpdk/spdk_pid89073 00:28:49.457 Removing: /var/run/dpdk/spdk_pid89075 00:28:49.457 Removing: /var/run/dpdk/spdk_pid89496 00:28:49.457 Removing: /var/run/dpdk/spdk_pid89542 00:28:49.457 Removing: /var/run/dpdk/spdk_pid89621 00:28:49.457 Removing: /var/run/dpdk/spdk_pid89670 00:28:49.457 Removing: /var/run/dpdk/spdk_pid90011 00:28:49.457 Removing: /var/run/dpdk/spdk_pid90260 00:28:49.457 Removing: /var/run/dpdk/spdk_pid90739 00:28:49.458 Removing: /var/run/dpdk/spdk_pid91320 00:28:49.458 Removing: /var/run/dpdk/spdk_pid92683 00:28:49.458 Removing: /var/run/dpdk/spdk_pid93268 00:28:49.458 Removing: /var/run/dpdk/spdk_pid93270 00:28:49.458 Removing: /var/run/dpdk/spdk_pid95259 00:28:49.458 Removing: /var/run/dpdk/spdk_pid95345 00:28:49.458 Removing: /var/run/dpdk/spdk_pid95431 00:28:49.458 Removing: /var/run/dpdk/spdk_pid95526 00:28:49.458 Removing: /var/run/dpdk/spdk_pid95679 00:28:49.458 Removing: /var/run/dpdk/spdk_pid95775 00:28:49.458 Removing: /var/run/dpdk/spdk_pid95860 00:28:49.458 Removing: /var/run/dpdk/spdk_pid95955 00:28:49.458 Removing: /var/run/dpdk/spdk_pid96295 00:28:49.458 Removing: /var/run/dpdk/spdk_pid96985 00:28:49.458 Removing: /var/run/dpdk/spdk_pid98321 00:28:49.458 Removing: /var/run/dpdk/spdk_pid98527 00:28:49.458 Removing: /var/run/dpdk/spdk_pid98809 00:28:49.458 Removing: /var/run/dpdk/spdk_pid99110 00:28:49.458 Removing: /var/run/dpdk/spdk_pid99651 00:28:49.458 Removing: /var/run/dpdk/spdk_pid99656 00:28:49.458 Clean 00:28:49.458 18:42:05 -- common/autotest_common.sh@1447 -- # return 0 00:28:49.458 18:42:05 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:28:49.458 18:42:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:49.458 18:42:05 -- common/autotest_common.sh@10 -- # set +x 00:28:49.458 18:42:05 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:28:49.458 18:42:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:49.458 18:42:05 -- common/autotest_common.sh@10 -- # set +x 00:28:49.458 18:42:05 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:49.458 18:42:05 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:49.458 18:42:05 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:49.458 18:42:05 -- spdk/autotest.sh@389 -- # hash lcov 00:28:49.458 18:42:05 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:49.458 18:42:05 -- spdk/autotest.sh@391 -- # hostname 00:28:49.458 18:42:05 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:49.716 geninfo: WARNING: invalid characters removed from testname! 00:29:21.790 18:42:32 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:21.790 18:42:36 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:23.692 18:42:39 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:26.978 18:42:42 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:29.531 18:42:44 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:32.064 18:42:47 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:34.647 18:42:50 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:34.647 18:42:50 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:34.647 18:42:50 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:34.647 18:42:50 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:34.647 18:42:50 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:34.647 18:42:50 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.647 18:42:50 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.647 18:42:50 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.647 18:42:50 -- paths/export.sh@5 -- $ export PATH 00:29:34.647 18:42:50 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:34.647 18:42:50 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:34.647 18:42:50 -- common/autobuild_common.sh@437 -- $ date +%s 00:29:34.905 18:42:50 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715625770.XXXXXX 00:29:34.905 18:42:50 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715625770.pGp3sc 00:29:34.905 18:42:50 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:29:34.905 18:42:50 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:29:34.905 18:42:50 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:29:34.905 18:42:50 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:34.905 18:42:50 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:34.905 18:42:50 -- common/autobuild_common.sh@453 -- $ get_config_params 00:29:34.905 18:42:50 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:29:34.905 18:42:50 -- common/autotest_common.sh@10 -- $ set +x 00:29:34.905 18:42:50 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-avahi --with-golang' 00:29:34.905 18:42:50 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:29:34.905 18:42:50 -- pm/common@17 -- $ local monitor 00:29:34.905 18:42:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:34.905 18:42:50 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:34.905 18:42:50 -- pm/common@25 -- $ sleep 1 00:29:34.905 18:42:50 -- pm/common@21 -- $ date +%s 00:29:34.905 18:42:50 -- pm/common@21 -- $ date +%s 00:29:34.905 18:42:50 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715625770 00:29:34.905 18:42:50 -- 
pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1715625770 00:29:34.905 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715625770_collect-vmstat.pm.log 00:29:34.906 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1715625770_collect-cpu-load.pm.log 00:29:35.841 18:42:51 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:29:35.841 18:42:51 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:29:35.841 18:42:51 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:29:35.841 18:42:51 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:29:35.841 18:42:51 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:29:35.841 18:42:51 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:35.841 18:42:51 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:35.841 18:42:51 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:35.841 18:42:51 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:35.841 18:42:51 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:35.841 18:42:51 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:35.841 18:42:51 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:35.841 18:42:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:35.841 18:42:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:35.841 18:42:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:35.841 18:42:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:35.841 18:42:51 -- pm/common@44 -- $ pid=104347 00:29:35.841 18:42:51 -- pm/common@50 -- $ kill -TERM 104347 00:29:35.841 18:42:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:35.841 18:42:51 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:35.841 18:42:51 -- pm/common@44 -- $ pid=104349 00:29:35.841 18:42:51 -- pm/common@50 -- $ kill -TERM 104349 00:29:35.841 + [[ -n 5260 ]] 00:29:35.841 + sudo kill 5260 00:29:35.852 [Pipeline] } 00:29:35.872 [Pipeline] // timeout 00:29:35.877 [Pipeline] } 00:29:35.896 [Pipeline] // stage 00:29:35.903 [Pipeline] } 00:29:35.921 [Pipeline] // catchError 00:29:35.932 [Pipeline] stage 00:29:35.939 [Pipeline] { (Stop VM) 00:29:35.954 [Pipeline] sh 00:29:36.234 + vagrant halt 00:29:40.423 ==> default: Halting domain... 00:29:46.992 [Pipeline] sh 00:29:47.268 + vagrant destroy -f 00:29:51.516 ==> default: Removing domain... 
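(Illustrative aside, not part of the console output.) The `stop_monitor_resources` / `signal_monitor_resources TERM` trace above (pm/common@29–@50) follows a plain pid-file pattern: for each resource collector started earlier (`collect-cpu-load`, `collect-vmstat`), look for its pid file under the output `power/` directory and send it SIGTERM. Below is a minimal bash sketch of that pattern; the function name, monitor names, and paths come from the trace, but the loop body is an assumption, not the actual `scripts/perf/pm/common` implementation.

```bash
#!/usr/bin/env bash
# Hypothetical re-creation of the pid-file shutdown pattern seen in the
# pm/common trace above; not the real SPDK scripts/perf/pm/common code.

signal_monitor_resources() {
    local signal=$1 power_dir=$2 monitor pid pid_file

    # Monitors named in the trace: collect-cpu-load and collect-vmstat.
    for monitor in collect-cpu-load collect-vmstat; do
        pid_file=$power_dir/$monitor.pid
        [[ -e $pid_file ]] || continue        # nothing was started, nothing to stop
        pid=$(<"$pid_file")
        kill -"$signal" "$pid" 2>/dev/null || true
    done
}

# Usage mirroring the paths in the log:
signal_monitor_resources TERM /home/vagrant/spdk_repo/spdk/../output/power
```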
00:29:51.529 [Pipeline] sh 00:29:51.807 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest_2/output 00:29:51.816 [Pipeline] } 00:29:51.835 [Pipeline] // stage 00:29:51.840 [Pipeline] } 00:29:51.858 [Pipeline] // dir 00:29:51.863 [Pipeline] } 00:29:51.880 [Pipeline] // wrap 00:29:51.886 [Pipeline] } 00:29:51.902 [Pipeline] // catchError 00:29:51.911 [Pipeline] stage 00:29:51.913 [Pipeline] { (Epilogue) 00:29:51.926 [Pipeline] sh 00:29:52.206 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:58.776 [Pipeline] catchError 00:29:58.778 [Pipeline] { 00:29:58.794 [Pipeline] sh 00:29:59.072 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:59.072 Artifacts sizes are good 00:29:59.081 [Pipeline] } 00:29:59.097 [Pipeline] // catchError 00:29:59.109 [Pipeline] archiveArtifacts 00:29:59.116 Archiving artifacts 00:29:59.287 [Pipeline] cleanWs 00:29:59.298 [WS-CLEANUP] Deleting project workspace... 00:29:59.298 [WS-CLEANUP] Deferred wipeout is used... 00:29:59.304 [WS-CLEANUP] done 00:29:59.307 [Pipeline] } 00:29:59.325 [Pipeline] // stage 00:29:59.331 [Pipeline] } 00:29:59.347 [Pipeline] // node 00:29:59.352 [Pipeline] End of Pipeline 00:29:59.384 Finished: SUCCESS
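(Illustrative aside, not part of the console output.) The coverage post-processing traced earlier in this run (autotest.sh@391–@397) reduces to a short lcov pipeline: capture test counters from the build tree with the hostname as the test name, merge them with the pre-test baseline, then strip DPDK, system, and example sources from the total. The sketch below condenses those invocations; the paths, filter patterns, and lcov options are the ones visible in the log, while the surrounding setup (an existing `cov_base.info`, lcov on PATH) is assumed.

```bash
#!/usr/bin/env bash
# Condensed version of the lcov invocations in the log; assumes lcov is
# installed and cov_base.info was captured before the tests ran.

repo=/home/vagrant/spdk_repo/spdk
out=$repo/../output
opts=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q)

# Capture counters produced by the test run, tagged with the hostname.
lcov "${opts[@]}" -c -d "$repo" -t "$(hostname)" -o "$out/cov_test.info"

# Merge baseline + test coverage into one report.
lcov "${opts[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

# Drop sources that are not SPDK's own code (patterns taken from the log).
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov "${opts[@]}" -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
done
```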